As we should now be able to follow any jump present in the code, it is time to make the analysis more automatic. My target tool for that purpose is objdump. However, we still have the firmware image as a raw memory dump. To use objdump easily, we need to pack our firmware into a container that objdump understands. The most obvious choice is ELF (Executable and Linkable Format), and this is what I am going to use.
For the purpose of packing data into ELF, I've made a Python library that makes this easier. For now, it is able to split a firmware image into sections, like .text or .data, so objdump will disassemble only the parts of the firmware that really are code. Moreover, it can define symbols inside the binary, so it is possible to record where certain functions start and end, and the same for variables, like strings. As of now, there is no CLI interface for the program. If it turns out that such an interface is necessary (e.g. for adding many symbols), it will be added.
The library code can be downloaded from Github. Currently, any LKV373A-specific modifications to the library are stored on the lkv373a branch, so as not to clutter the master branch. Throughout this tutorial I assume we are using the code on that branch, so there might be some LKV373A specifics, especially regarding enum types (i.e. the processor architecture enum).
At this point, I need to warn that I am not going to describe the internal structure of an ELF file, nor any features visible from the outside, like the concept of sections, so if you are not familiar with them, this is a good time to learn about them, as otherwise it might be very difficult to follow what I am writing about. There are many good resources explaining them. The ones I was using are: this blog post and this documentation.
Creating new ELF
Example code that creates a brand new ELF file is as simple as:
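(A minimal sketch; the import layout and the exact spelling of the custom architecture id are assumptions based on the lkv373a branch, so check the library documentation for the authoritative names.)

#!/usr/bin/env python3
import os
from makeelf.elf import *

# create an empty ELF for the custom machine id added on the lkv373a branch
elf = ELF(e_machine=EM.EM_LKV373A)

# serialize the whole structure and write it straight to a file descriptor
fd = os.open('lkv373a.fw.elf', os.O_WRONLY | os.O_CREAT)
os.write(fd, bytes(elf))
os.close(fd)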
This first does all the necessary imports, then creates a new ELF object and, finally, converts it to a bytes object and immediately writes it to a file descriptor. That's it!
After this, you should get a valid, empty ELF file for an architecture called lkv373a, which obviously does not exist and which no other program knows how to handle, but we are going to change that in the future.
While creating the ELF object, a few things can be defined in addition to the architecture id. They are all described in the documentation I will mention near the end of this tutorial. You are also free to dig into the structure of the ELF object. There is no encapsulation in it and structure validation is very permissive, so even completely broken ELFs can be produced, if needed.
Adding section
The next step is to add some sections to our ELF file.
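A sketch of that step is below. The section boundaries are illustrative guesses, except for .text, which is assumed to start at 0x7d100 as established earlier in this series, and the assumption that append_section returns the new section's index should be verified against the documentation.

# read the raw dump of the encoder firmware analyzed in the previous parts
fd = os.open('LKV373A_TX_V3.0c_d_20161116_bin.bin', os.O_RDONLY)
fw = os.read(fd, 0xffffff)
os.close(fd)

# carve the interesting regions out of the raw dump
irq = fw[0:0x100]            # interrupt vectors at the very beginning
text = fw[0x7d100:0xbab2f]   # code, up to the string area
data = fw[0xbab2f:0xc0000]   # strings and other data

# append_section(name, content, address) - returns the index of the new section
irq_id = elf.append_section('.irq', irq, 0)
txt_id = elf.append_section('.text', text, 0x7d100)
dat_id = elf.append_section('.data', data, 0xbab2f)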
First, I extract them from the firmware image and then insert them into the ELF object. append_section is a handy wrapper around the low-level modifications that must be done on the ELF structure hidden under what we see as the ELF instance (these low-level structures are, however, still available to the user as the ELF.Elf member).
Modifying section attributes
Ok, so now we have sections in our ELF file, ready to be saved to disk. Before that, one more thing can be done: setting proper attributes. Among other features, which I am going to ignore here, they tell readers whether the program is allowed to write or execute particular sections of memory. This might be useful, as some readers might otherwise be confused about what is code (text) and what is data. In our case, we have two text sections (.irq and .text), so we are going to set the executable flag (SHF_EXECINSTR) on them. Furthermore, we will set the SHF_ALLOC flag on every section that is going to be loaded into memory (so all of them).
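A sketch of that, poking at the low-level structures through the ELF.Elf member; the attribute names below follow the ELF specification, and whether makeelf exposes them exactly like this is an assumption worth verifying against its documentation.

# standard ELF section flag values
SHF_ALLOC = 0x2
SHF_EXECINSTR = 0x4

# every section we created is meant to be loaded into memory
for sec_id in (irq_id, txt_id, dat_id):
    elf.Elf.Shdr_table[sec_id].sh_flags |= SHF_ALLOC

# only the two text sections contain executable code
for sec_id in (irq_id, txt_id):
    elf.Elf.Shdr_table[sec_id].sh_flags |= SHF_EXECINSTR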
Segments are another concept, existing alongside sections. They are stored in the program header of the ELF file and are loosely linked to section data. They allow defining another set of attributes for areas of memory. I don't think they are required for analysis in objdump, but since at least one program header defining a segment must exist in an ELF file of type executable, there is an interface for them similar to the one for sections.
To define a new segment, based on the .text section, you can issue:
elf.append_segment(txt_id, flags='rx')
This also marks the segment as readable and executable, but not writable.
Loading existing ELF
Loading an existing ELF from a file can easily be done with:
newelf, b = ELF.from_file('lkv373a.fw.elf')
Alternatively, it can also be loaded from bytes object:
fd = os.open('some.elf', os.O_RDONLY)
b = os.read(fd, 0xffff)
os.close(fd)
manualelf, b = Elf32.from_bytes(b)
In the latter case, I assume that the os library is already imported into Python.
Adding a symbol
This is very useful when analyzing code. New symbols can be added using calls like:
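(A sketch of the three calls discussed below; the symbol names, the 0x60 size of the second function and the exact keyword arguments are illustrative assumptions, while the section ids are the ones returned by the append_section calls above.)

# a function of length 0x44 at the very beginning of the .irq section
elf.append_symbol('irq_entry', irq_id, 0, 0x44,
        sym_binding=STB.STB_GLOBAL, sym_type=STT.STT_FUNC)

# a function known only by its absolute address - convert it to a .text offset
elf.append_symbol('some_function', txt_id, 0x9b9f8 - 0x7d100, 0x60,
        sym_binding=STB.STB_GLOBAL, sym_type=STT.STT_FUNC)

# a string object; offset and length computed from absolute addresses,
# left with the default local binding
elf.append_symbol('str_stats_fmt', dat_id, 0xbab5c - 0xbab2f,
        0xbab73 - 0xbab5c, sym_type=STT.STT_OBJECT)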
The first call defines a function of length 0x44 in the .irq section. To do this, the ID of the .irq section must be known. Luckily, we want to add the symbol at the very beginning of the section, so 0 was provided as the offset.
In the second case, we also want to define a function, but now we only know the absolute address of the function (0x9b9f8), while what we need to pass is an offset into the .text section. To get it, we subtract the address of the start of .text (0x7d100).
In the last example, we define a string as an object with a certain address and length. Both the offset and the length are computed by subtracting absolute addresses. This symbol will be marked as local, which is the default behavior of the append_symbol function.
Library documentation
There are many more things possible with the makeelf library. What I showed here is mostly what can be done with the high-level wrappers, which do many things under the hood. But since there is also a low-level interface, virtually anything is possible.
To make exploring the interfaces easier, I've made doxygen documentation for most of the library. It can be found on my server, here. Feel free to use the library for anything you want.
Conclusion
The library presented here should take us one step further towards an easy-to-use reverse engineering environment. As a bonus, it allows storing new findings in easily modifiable Python scripts.
The examples of the library interfaces shown above split the LKV373A firmware image into 4 sections. At this moment I already know that there are at least 6 sections, with code and data each split into two parts (forming an ICDCDS layout, where I stands for irq, C for code, D for data and so on). There should also be some more symbols that could already be placed.
If I succeed in porting objdump, or any other tool able to disassemble an ELF file, the next step will be to publish a Python script, based on the library presented here, that annotates the LKV373A firmware. So stay tuned, I hope there will be many more interesting findings throughout this reversing process!
As I wrote in the previous part, my choice is in fact to reverse engineer the instruction set. The goal of this post is not to reverse engineer the whole instruction set, because even in RISC architectures some instructions are used quite rarely.
Before starting the actual reverse engineering, a place where such analysis would be easiest should be identified. Then it should be possible to use frequently repeating patterns to guess the instructions the pattern consists of. The result of this tutorial will be a description of the opcodes related to jumping and a few other commonly used opcodes, mostly related to memory operations.
Identifying target
As written above, the first step is to find a good target. As was shown in the previous article, it is possible to find references to constants mixed into the code.
In the picture on the right, one especially interesting piece of information can be seen – it seems that FreeRTOS was used as the operating system. FreeRTOS is an open source project, so its source code can be downloaded.
This information can later be used to link the compiled code with the FreeRTOS source code. Let's look into the source to find some useful location. As we already know the encoding of an opcode operating on strings, finding some string used in the code might work. I was able to identify one such place in the tasks.c file. It is shown in the snippet below:
4110 if( ulStatsAsPercentage > 0UL )
4111 {
4112     #ifdef portLU_PRINTF_SPECIFIER_REQUIRED
4113     {
4114         sprintf( pcWriteBuffer, "\t%lu\t\t%lu%%\r\n", pxTaskStatusArray[ x ].ulRunTimeCounter, ulStatsAsPercentage );
4115     }
4116     #else
4117     {
4118         /* sizeof( int ) == sizeof( long ) so a smaller
4119         printf() library can be used. */
4120         sprintf( pcWriteBuffer, "\t%u\t\t%u%%\r\n", ( unsigned int ) pxTaskStatusArray[ x ].ulRunTimeCounter, ( unsigned int ) ulStatsAsPercentage );
4121     }
4122     #endif
4123 }
4124 else
4125 {
4126     /* If the percentage is zero here then the task has
4127     consumed less than 1% of the total run time. */
4128     #ifdef portLU_PRINTF_SPECIFIER_REQUIRED
4129     {
4130         sprintf( pcWriteBuffer, "\t%lu\t\t<1%%\r\n", pxTaskStatusArray[ x ].ulRunTimeCounter );
4131     }
4132     #else
4133     {
4134         /* sizeof( int ) == sizeof( long ) so a smaller
4135         printf() library can be used. */
4136         sprintf( pcWriteBuffer, "\t%u\t\t<1%%\r\n", ( unsigned int ) pxTaskStatusArray[ x ].ulRunTimeCounter );
4137     }
4138     #endif
4139 }
4140
4141 pcWriteBuffer += strlen( pcWriteBuffer );
If we look at the code before the snippet, we can see that the function is surrounded by ifdefs and is meant to be enabled only for demo purposes. Also, searching for the complete format string from the sprintf call above fails on the LKV373A firmware. Fortunately, the code is present, just with some modifications, one of them being the format string we were searching for. I was able to identify these strings starting at offset 0xbab2f. You can see them in the hexdump. What is a bit surprising is that there are four such strings, while we expected only two in the whole code. But the "IDLE" string after them confirms that this must be the tasks.c module.
Now we can use the method shown in the previous tutorial about processor identification to find references to these strings. I finally found usages of offsets 0xbab5c and 0xbab73 (marked in green and blue) near offset 0x91fd4.
At this moment, we have machine code and source code that is very likely to have been compiled into it one to one. We can also see a very useful side effect of the popularity of open source: a device with quite an unusual function still turns out to run open source software, so we can be almost sure that any random part of its code has open source software behind it as well.
Code patterns
As we already have a quite reliable anchor for our analysis, we could try identifying more opcodes. But, to make things easier, I want to go the pattern matching way. Whichever architecture you analyze, you see some patterns that are the same or almost the same everywhere. This is especially true for RISC architectures: they have a very limited set of instructions, so the compiler has to join two or more of them to get the desired high-level functionality. In this section, I will describe some of the patterns I was able to identify and decode in the LKV373A firmware. They are:
Function call
Function prologue and epilogue
Read-only memory access
Compare and jump
Variadic functions
Below is a short description of each of the above patterns.
Function call
This is the main element of the ABI (Application Binary Interface) from the point of view of a programmer. Therefore it should be well known, even to people not involved in assembly programming or reverse engineering. It is all about the method of passing arguments before a call to a function.
Let’s see how such a call looks on MIPS architecture:
As we can see, in the case of MIPS, arguments are passed in registers named a0, a1, a2 and so on. Then the address of the function to call is loaded into t9 and jalr (jump and link register) is performed. Usually, in ABIs where arguments are passed in registers, when the number of arguments is greater than the number of such registers, the rest are passed through memory (i.e. the stack).
Function prologue and epilogue
When performing a call to a different function, the state of the processor has to be preserved, so that after the call it is the same as before (with the exception of a few registers, used e.g. for passing the return value). This may be done by the caller or the callee. Let's look again at MIPS code to see how it works there:
I think there is nothing special to comment on here. The opposite happens in the function epilogue and, additionally, immediately after that a return instruction should appear.
Read-only memory access
This one was already partially analyzed in the previous part of this series. It usually appears wherever some string needs to be used in the code. Strings written in code as literals are stored in a part of memory that is not supposed to be modified. Their addresses are then computed using some base register, or used directly if the code is not relocatable. This is how it works on MIPS:
lw t9, -30404(gp)
But, as we saw in the previous post, we have something missing on our mysterious architecture. There, the address was computed as $3=$3+0xbadadd. So we should expect the register used there to be set to something beforehand.
Compare and jump
This, after calling conventions, is another extremely popular scheme. It usually consists of two opcodes. First, two numbers are compared and some flags in the processor are set. Then, based on the state of one of the flags, a jump is performed, or execution simply continues if the flag has a value different from the expected one. This time, let's see how it works on the x86 platform, as MIPS uses a slightly different philosophy:
cmp [ebp+arg_4], eax
jnz loc_404CEB
On x86, the cmp instruction performs a subtraction of its two operands without storing the result, only updating the FLAGS register, so it is known whether the result was zero, whether an overflow happened, and so on. Then, based on a flag value (in this case, if the zero flag is not set), the jump occurs, or not.
Variadic function
A variadic function is a function that can take a variable number of parameters. The most popular example of such a function is printf. It accepts a format string and parameters, whose number depends on the format string. On a system where parameters are passed through registers, I expect it to get the format string through a register and the rest of the parameters through the stack or a dedicated structure, so through memory in general. Once we know how constants are accessed, it should be quite easy to identify: it will most likely get the format string as one of the first parameters, and somewhere close to that the remaining parameters should be stored, one after another.
Identifying patterns
Now that we know what patterns to look for, it is time to find them in the code and guess the functions of particular opcodes.
Read-only memory access
Let's look at the area near the reference to our format string:
After splitting the instruction words into parts and decoding the opcode, register and immediate values, we can see that the string's address is based on register $4 and is also stored in register $4. If we look a few lines upwards, we can see that register $4 is computed from register $0 and offset 0x0b (marked in orange). Register $0 is often used to permanently hold the value 0. Now, if we look at the original offset of the string in the firmware, we can see that it is 0xbab5c! So that instruction must store an immediate value in the register's upper half. We have therefore just guessed the function of opcode 06. Later this opcode will be described as lh (load high).
By the way, we have also discovered that the firmware image is almost surely mapped to address 0 after being loaded into operational memory – or, more likely, the EEPROM is simply mapped at address 0 in the address space.
Compare and jump
In the snippet above, one more thing is quite interesting, and weird at the same time. There are a few instructions that do not seem to have any register encoded and have weird, uneven offsets, often with quite low values. At this moment my theory is that the shorter ones are some kind of jumps (like the one at 0x91fd8) and the longer ones are function calls (like the one at 0x91fd0). So it is time to try to find the compare-and-jump pattern.
If we are right, then opcodes like 00 and 04 are jumps and 01 means a call. After this section we should also be able to tell conditional and unconditional jumps apart.
Ok, so now we need to go back to the source code and find a good candidate for a conditional jump. It should be as close to the format string as possible. If we look at the snippet from the Identifying target section, we can see one such check on line 4110. It checks whether a value is greater than zero. Going upwards a little bit, we encounter one 04 opcode, immediately preceded by a 2F instruction:
Now, if we look at the occurrences of the 2F opcode, we can spot that it usually appears near an 04 opcode. However, we cannot tell why they appear in exactly this order, which is quite weird. On the other hand, if we look at the registers this particular occurrence uses, it is quite likely to be a compare opcode.
If we assume that 2F is cmp (compare) and 04 is jg (jump if greater), we see that this more or less matches the behavior we expect from the code immediately preceding the sprintf call in the FreeRTOS source code.
However, we are still missing one piece of information: what the offset means. If it is a jump instruction, then we cannot jump 10 bytes ahead, because we would land in the middle of an instruction. If we look again at the source code, we can see that our jump should not go very far forward, so the value should not be too high either. We can also exclude use of a register as the address, because it would have to be register $10, which is not set anywhere near the jump.
Having no other idea, I did an experiment. I multiplied the jump value by 4, because the instruction length is always 4, and added the address of the next instruction to the result. Then I checked what is there and… bingo! The jump went over the sprintf call and landed immediately after it. Some time later, I discovered that this is not entirely accurate. The real formula turns out to be:
addr = imm * 4 + PC
where imm is the instruction's argument and PC is the program counter before executing the instruction (so the address of the jump opcode itself).
The question remains how more sophisticated comparisons are performed, because every one I've seen so far only tells which value is greater. As there does not seem to be any flag field in the instruction, maybe there is no other option and some arithmetic operation has to be performed first to reduce other comparisons to a greater-than check?
Function call
Another thing we can learn near the sprintf function is how parameters are passed to a function. The signature of sprintf is:
int sprintf(char *str, const char *format, ...);
So after analyzing its call, we should know how at least the first two parameters are passed. Let's see how it looks in machine code:
Here another interesting detail appears: the last instruction setting up the argument registers comes after the actual call. This is perfectly normal and can also be found on the MIPS architecture (a branch delay slot). Its purpose is to allow concurrent execution of the two instructions.
Variadic functions
Now, if we scroll up a bit, we can see an interesting bunch of 35 opcodes. We know that our call should have more than 2 parameters, and thanks to the format string we can tell that there should be exactly 5 extra parameters. If we count the 35 opcodes, we see that the numbers match.
So we can be almost sure that opcode 35 takes the value of its third register parameter and stores it at an address computed as follows:
addr = reg1 + reg2 + imm
So e.g.
*($0 + $1 + 0x0c) = $26
Unfortunately, with only that information we can only guess the order of the parameters, i.e. whether $4 is the first parameter or the last one.
From now on, we will denote opcode 35 as sw (store word).
Function prologue and epilogue
As we know exactly at which address the call will land, we can try to decode the function prologue and epilogue, i.e. what happens just after the call and just before returning. Let's see what such code looks like, using the sprintf function as an example:
By the way, immediately after sprintf there is the strlen function, which is also called by the function we are analyzing. We can see that the registers stored with sw instructions are later recovered by opcode 21. So we can safely assume it is the reverse operation and denote it as lw (load word).
We also see that the last jump in the function is done by opcode 11, so we can denote it as ret (return). I still don't know the meaning of its parameters. If we use the standard decoding, it would be:
ret $0, $0, $9, 0x00
But I have no proof that this is the real meaning. I only see that this third hypothetical register sometimes has a different value, but usually it is $9.
From my experience, the register saving we see here is done to the stack. If that is also the case here, then we have two candidates for the stack pointer: $1 and $31. Some more investigation is needed to tell which one is SP.
Other methods
We can also try to find constants other than strings. Then there is a chance that some arithmetic operation is performed on them. Personally, I haven't tried that approach, so I can't show any example.
Another method might be finding references to some known structures. We can see one such structure (TaskStatus_t) in the function we analyzed. This is also left as an exercise for the reader.
Conclusion
The main focus of the analysis was on branching. As shown, we now know quite a lot not only about branching, but about the whole ABI. As soon as the main entry point of the system is found, it should be possible to discover the complete flow of the program.
We now know that the first parameters are passed in registers $3, $4 and possibly further ones. After analyzing the function prologue and epilogue, we also know that here the callee is responsible for preserving register values.
To sum up, we already know the following instructions (a toy decoder based on this list is sketched right after it):
00: jmp off
01: call off
03: j? off
04: jg off
06: lh $r1, $r2, imm
11: ret
21: lw $r1, $r2, $r3, imm
2A: la $r1, $r2, imm
2F: cmp $r1, $r2
35: sw $r1, $r2, $r3, imm
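The decoder below is a sketch built only from the guesses above, not a verified disassembler: the bit positions follow the I-format split derived in the previous part, the branch-target formula is the imm * 4 + PC rule found above, and since the extra register of lw/sw and the real width of the jump and call offsets are still unknown, those instructions are only partially decoded.

import struct

MNEMONICS = {0x00: 'jmp', 0x01: 'call', 0x03: 'j?', 0x04: 'jg', 0x06: 'lh',
             0x11: 'ret', 0x21: 'lw', 0x2a: 'la', 0x2f: 'cmp', 0x35: 'sw'}

def decode(word, pc):
    opcode = word >> 26          # 6-bit opcode
    r1 = (word >> 21) & 0x1f     # first 5-bit register field
    r2 = (word >> 16) & 0x1f     # second 5-bit register field
    imm = word & 0xffff          # 16-bit immediate
    mnem = MNEMONICS.get(opcode, 'op%02X' % opcode)
    if opcode in (0x00, 0x01, 0x03, 0x04):
        # branch-like opcodes: the immediate seems to be a word offset from PC
        return '%s 0x%x' % (mnem, pc + imm * 4)
    return '%s $%d, $%d, 0x%x' % (mnem, r1, r2, imm)

with open('LKV373A_TX_V3.0c_d_20161116_bin.bin', 'rb') as f:
    fw = f.read()

start = 0x91fd0  # the area around the sprintf call analyzed above
for off in range(start, start + 0x20, 4):
    (word,) = struct.unpack('>I', fw[off:off + 4])
    print('%08x: %s' % (off, decode(word, off)))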
We also know that this architecture has a delay slot mechanism, identical to the one in MIPS. Together with the fixed-size instructions and the opcode and register field lengths, it is really similar to MIPS. Unfortunately, it is not exactly the same, so reverse engineering of the ISA has to continue.
However, after doing the research described here, I see that the tools I used are not enough to continue reversing efficiently. So, before making another step forward, I have to find a way to introduce more automation into the process. As soon as I succeed with this, I will write the next part, so stay tuned!
In the first part, I identified two main problems for further development. The first one is the unidentified checksum appended to the end of the firmware image. The second one is the unknown LZSS-like compression algorithm used to compress the machine code of the application processor's firmware.
Encoder firmware
The thing that until now was more or less unexplored is the encoder firmware. The LKV373A contains two processors – an application processor, using the firmware analyzed in the first part, and an encoder, whose reversing I am going to push forward with this article.
The target firmware I am working on is called LKV373A_TX_V3.0c_d_20161116_bin.bin and can be obtained from danman's firmware collection.
At first sight, the encoder firmware looks completely different from ITEPKG. The latter was completely structured. This one starts with some data fields, then there is a big block of random-looking data, interleaved with some strings. At the end there is the familiar SMEDIA02 structure. And this block of "random" data is our target.
The first idea I had was to run binwalk with the -A switch to look for some known opcodes. No luck. But if we look closer, it seems that this really is some kind of machine code.
Finding target
Ok, what we know now is that in the firmware image there is a region with many strings written one after another, as in the first hexdump, and in another place quite a lot of data where some of the words are similar to each other. One such fragment can be seen in the other hexdump.
So we can guess that there is a code region around 0x81680 and another one near 0xbbc70. Now the question is: how to prove it.
Before proving our hypothesis, one thing is worth noting. If we are right and the bytes here really are machine code, then we can be almost sure that instructions are always (or almost always) encoded as 32-bit words. That fact will be very useful when we try to find candidate architectures to test against our characteristics.
How to prove it?
Fortunately, we can see one interesting candidate string. There is a debug message, marked in red, in the data region. If our guess is valid, we should find some reference to it, and moreover we can expect there to be only one reference to that particular string.
But now there is another problem: how do we find a reference to a string stored somewhere in memory, at an offset we have no chance to guess? This is where experience with any machine code becomes useful. Usually, assembly mnemonics are translated more or less into the form written by the programmer. So, if we have a hypothetical instruction:
add $1,$2+0x1234
Chances are it will be translated to something like:
0x21 0x01 0x02 0x12 0x34
where the lengths of these fields and the maximum offset that can be encoded depend on the particular architecture.
We can make another assumption. If persistent memory (like EPROM) is eventually mapped into operational memory, usually nobody designs the device in such a way that the start of a section lands at an address not padded to, for example, the page size. So, if we are lucky, the final address of the string in memory should have the same least significant bits as its offset in our firmware image.
Then, connecting the dots, we can try searching the firmware for, let's say, the two least significant bytes of the string offset (0xbcc8). One more remark here: we still don't know the endianness of the processor. And what is worse, after reading the firmware format description we know it even less. So we have to check both variants.
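A quick brute-force search along these lines is enough to get the list of hits discussed below; this is just a Python sketch that looks for the 16-bit value 0xbcc8 in both byte orders across the whole image.

import struct

with open('LKV373A_TX_V3.0c_d_20161116_bin.bin', 'rb') as f:
    fw = f.read()

needle = 0xbcc8  # two least significant bytes of the string offset
for label, pattern in (('big endian', struct.pack('>H', needle)),
                       ('little endian', struct.pack('<H', needle))):
    pos = fw.find(pattern)
    while pos != -1:
        print('%s hit at 0x%x' % (label, pos))
        pos = fw.find(pattern, pos + 1)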
Of course, it might happen that with a different firmware, for a completely different architecture, all of this fails. There is only one piece of advice for succeeding with such analysis: be creative!
The first impression is that I was wrong: there are in fact 5 hits. But… if we look at the general firmware structure, we can see that most of the hits are inside the SMEDIA container, which, as I described in the previous post about reverse engineering the LKV373A, consists mostly of compressed data and is unlikely to contain any uncompressed code. So, success! We have only one hit inside the area we suspected to be code.
By the way, we now know more than just that the area is in fact a code section. We see that numbers in the machine code are stored as big endian, and that 16 bits can be used as an offset here. That further narrows the number of possible architectures. For instance, the popular ARM architecture is little endian, so it is not likely to be the one used here.
I'll leave verifying that we are right, using other strings present in this firmware, as an exercise.
Further guesswork
The next thing that might ease the work a bit is finding out the overall format of an instruction, that is:
how many bits are used for opcode?
how many operands we can use?
how many bits encode an operand?
When I was doing this work, I was not experienced enough to see it from the single word we already have. So my approach was to guess the meaning of the 0xA8 opcode we see. As we know that it points to a string in possibly write-protected memory, we should not expect it to perform any arithmetic or logic operation. It is more likely to take the string's address and store it somewhere. Since the architecture gives us only 32 bits per instruction, it is not likely to copy memory contents anywhere. To me, the most probable operation is computing the string's address from some base register and storing it in another register. Therefore, I will call the operation: "Load Address" (abbreviated "la").
Because that information alone does not give us any more hints, but can be used to match our mysterious architecture against known ones, it is high time to do even more guesswork.
Specification matching
Wikipedia has quite a useful page that might help us. But first it is good to check some popular architectures to see if they match.
As an example, I will use the MIPS architecture. It is often used in embedded systems and was quite popular in routers, especially in the past, when they were not yet running Linux. Furthermore, it is the most popular architecture that might be big endian, so it seems like the perfect first check.
Now, we can find some hints on Wikibooks. On MIPS there is an instruction format that might match our characteristics.
I Format
opcode | rs     | rt     | imm
6 bits | 5 bits | 5 bits | 16 bits
After a search through the MIPS manual (its name is "MIPS32™ Architecture For Programmers Volume II: The MIPS32™ Instruction Set"; it is probably no longer available from the original source, only from random sites, so no link here), we can see that the instruction whose first 6 bits match our target (0b101010) is "Store Word Left" (swl), so it is not really what we expected.
Now we have to move on and check another architecture, and another, and another, until we either decide we can't find anything matching or finally find something. In my case the result was a failure. I checked:
MIPS
ARM
ARC
RISC-V
PowerPC
SPARC
MicroBlaze
Xtensa
IBM S/390
Motorola 68k
DLX
Mico32
LEON
OpenRISC
NIOS II
m32r
Nothing matched, so the architecture here is something really uncommon. Because the chip probably performs HDMI signal processing besides acting as a normal processor, it is likely in fact a soft processor programmed into some FPGA. Therefore, finding the architecture's documentation will most likely be impossible.
Decoding instruction format
Then we can try another approach. Let's play a little and try to decode the rest of the instruction using the MIPS I format above. The first 6 bits are the instruction opcode (0x2A or 0b101010), the next two 5-bit fields are the destination and source registers, so we have register 3 as destination and register 3 as source. That definitely makes sense!
So our hypothesis is that we have a 6-bit opcode, 5-bit operands and a 16-bit offset encoding. Here, again, proving that is left as an exercise for the reader.
Going back to our target instruction, we now know that it executes code like this:
$3 = $3 + 0xbcc8
We will denote this instruction as:
la $3,$3+0xbcc8
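The arithmetic can be double-checked in a few lines of Python; the full word 0xA863BCC8 used below is reconstructed from the 0xA8 opcode byte and the 0xbcc8 offset visible in the hexdump, so treat it as an illustration of the field split rather than a quote from the firmware.

word = 0xA863BCC8

opcode = word >> 26        # 0b101010 == 0x2A
rs = (word >> 21) & 0x1f   # 3
rt = (word >> 16) & 0x1f   # 3
imm = word & 0xffff        # 0xbcc8

print('opcode=0x%02X rs=$%d rt=$%d imm=0x%04x' % (opcode, rs, rt, imm))
# prints: opcode=0x2A rs=$3 rt=$3 imm=0xbcc8, i.e. la $3,$3+0xbcc8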
What we know?
Now, to sum things up, we have learned the following about the device's architecture:
32-bit (4-byte) static instruction length
big endian
6-bit opcodes (maximum number of opcodes is 0x40)
5-bit operands (32 general purpose registers available)
indirect addressing of up to 65535 bytes (0xffff)
What next?
Since we now know quite a lot about the architecture, we can choose one of two ways: try to find out the name of the architecture and find its documentation (not likely to succeed), or reverse engineer as many instructions as we can. My choice will probably be the latter, and if I succeed in pushing the knowledge forward, I will try to describe the process in the next article in the series.
Recently, I bought an LKV373A, which is advertised as an HDMI extender over Cat5e/Cat6 cable. In fact, it is a quite cheap HDMI-to-UDP converter. Unfortunately, its inner workings are still more or less unknown. Moreover, by default it transmits 720p video and does not do HDCP unpacking, which is a pity, because it is not possible to capture the signal from devices like cable/satellite TV set-top boxes. That is why I started some preparations to reverse engineer the thing.
Fortunately, a few people have been interested in the topic before (especially danman, who discovered a second purpose for the device). To make things easier, I am gathering everything that is already known about the device. For that purpose I created a project on Github, which is meant to serve as the device's wiki. Meanwhile, I was also able to learn more or less how the firmware container is constructed. This should allow everyone to create custom firmware images as soon as one or two unknowns are solved.
The first unknown is the method of computing the suspected checksum at the very end of the firmware image. Solving it would allow making modifications to the filesystem. The other is the compression algorithm used to compress the program. For now, it should be possible to dissect the firmware into a few separate fragments. Below I describe what I already know about the firmware format.
ITEPKG
The whole image starts with the magic bytes ITEPKG, so this is what I call the outer container of the image. It can store data of a few different types. The most important one is denoted by type 0x03. It stores another data container, which almost certainly holds the machine code of the bootloader, and another entity of the same type that stores the main OS code. This type probably also stores the memory address at which the content will be placed after uploading to the device. The second important entity is denoted by type 0x06 and means a regular file. Such files are stored internally on a FAT12 partition on the SPI flash. There is also a directory entry (0x05), which together with the files forms a complete partition.
SMEDIA
Another data container, mentioned in the previous section, is identifiable by the magic SMEDIA. It consists of two main parts, whose lengths are stored at the very beginning of the header. The first part is some kind of header and contains unknown data. The good news is that it is uncompressed. The second part is another container. Now the bad news: it contains compressed data chunks.
SMAZ
This container's function is to split data into chunks. One chunk probably has a maximum length of 0x40000 bytes (uncompressed). Unfortunately, after splitting, the chunks are compressed with an unknown algorithm behaving similarly to LZSS – and I have some previous experience with a variant of LZSS, so if I say so, it is very likely true 🙂 . For now I have hit a wall, but I hope I'm gonna break through it some time soon. Stay tuned!
NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 25th July 2017.
As I promised some time ago, I am now going to describe the process of pre-personalization of a JCOP card. JCOP is one of the easier to get JavaCard-compatible cards. However, they cost a bit. The problem with the ones available from eBay sellers is the lack of pre-personalization. OK, there are some advantages of buying a non-pre-personalized card, like the ability to set most of its parameters, but on the other hand it is quite easy to make such a card unusable.
Online resources, as I mentioned in the first part of the tutorial, are not very descriptive. They say that there is such a thing as pre-personalization and that it has to be done before using the card, flashing applets, using them and so on. There is only one source that helps a little. Someone has written a script for the process. However, there are two problems with that script. The first one is that it is written in some custom language, and the internet knows nothing about the interpreter; it is probably something NXP – the manufacturer of JCOP – provides to its customers, and neither me nor (probably) you, reader, are their customers. The consequence is that we have a script in a custom language, with commands like '/select' or '/send'. Fortunately, the documentation of ISO 7816 (smart card communication) allows deciphering this, so that problem could finally be solved. The other problem is the lack of command values and addresses in memory, so we do not know where and how to read/write/execute anything. After a really deep search in Google, I was finally able to find all the missing values, so this tutorial could be written.
Process overview
Ok, after this way too long historical introduction, let's see what will be needed. I assume you are already able to communicate with your card using raw APDUs. If you are not, there are quite a few resources to learn that from, so I will not describe it here. The most important thing is to have the so-called transport key (KT). If you do not have it, go get it now. The seller should have provided it to you, and if he did not, you are stuck, since the first step requires this key.
So, basically, the steps are as follows:
Select root applet with Transport Key
Boot the card
Read/write some data
Protect the card
Fuse it
Easy? Easy. But only if you know some hex numbers. OK, here comes one big WARNING: the last step is irreversible and can quite easily be done by mistake, so think twice before sending anything, and if you are sure that you are done, think twice again before issuing it.
Pre-personalization, step by step
First, we use the Transport Key to select the proper applet. The format of the SELECT command is as below:
CLA=00 INS=A4 P1=04 P2=00 Lc=10 (...)
where CLA is always zero, INS means SELECT, P1, according to ISO 7816, means selection by DF name and Lc is the length of KT. After that, the key has to be appended. Of course, the whole APDU has to be given to the communication program as binary or hex values only.
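For example, with the pyscard library the SELECT could be sent like this (a sketch only: the reader index and, of course, the transport key are placeholders you have to fill in yourself):

#!/usr/bin/env python3
from smartcard.System import readers

KT = [0x00] * 16  # placeholder - put your 16-byte transport key here

conn = readers()[0].createConnection()
conn.connect()

# SELECT by DF name, with the transport key as the data field
apdu = [0x00, 0xA4, 0x04, 0x00, 0x10] + KT
data, sw1, sw2 = conn.transmit(apdu)
print('SELECT status: %02X%02X' % (sw1, sw2))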
What follows now is specific to NXP cards and is mostly undocumented publicly. The first of these commands is the BOOT command. Its format is as follows:
CLA=00 INS=F0 P1=00 P2=00
Now double care has to be taken, because the FUSE command becomes available after this point and its APDU consists only of zeros, so any mistake might make the card unusable, since security keys are generated randomly for each card.
Reading memory
Now, the most important values to read are called CM_KEY and GPIN in the memory dump I shared in the previous post on the topic. The first one starts at offsets 0xc00305, 0xc00321 and 0xc0033d, each part being 0x10 bytes long. The other one can be found at offset 0xc00412 and by default should be 5 bytes long. However, the maximum length is also 0x10, so it is better to make sure the length really is 5 by reading the byte at offset 0xc00407. To sum up, the following commands need to be issued and the results saved for future use:
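(The command bytes below are my reconstruction from the offsets above and the addressing scheme explained right after them – they were not copied from any official document, so double-check them before use.)

CLA=C0 INS=B0 P1=03 P2=05 Lc=10   (CM_KEY, first part)
CLA=C0 INS=B0 P1=03 P2=21 Lc=10   (CM_KEY, second part)
CLA=C0 INS=B0 P1=03 P2=3D Lc=10   (CM_KEY, third part)
CLA=C0 INS=B0 P1=04 P2=07 Lc=01   (GPIN length)
CLA=C0 INS=B0 P1=04 P2=12 Lc=10   (GPIN value, up to its maximum length)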
where CLA + P1 + P2 is the concatenated address of the memory area to read, INS=B0 is the read command and Lc contains the length of the data to read.
Writing data
Alternatively, it is possible to write custom values to these buffers. This is especially encouraged for users who want to use the card for more than just testing. Overwriting the values can be done with the following:
where the user data field is filled with some random data of the length given in the Lc field.
Required values
Besides securing the keys, it is required to set the CM_LIFECYCLE value to 0x01 and make sure all fields related to keys and PIN have proper values. Here, my memory dump can be used as a reference, since I initialized the card before dumping the memory.
Finishing
After setting all the fields to the desired values, there are two more steps to do. The first one is issuing the PROTECT command. It looks as below:
CLA=00 INS=10 P1=00 P2=00
And finally, sending the FUSE command with:
CLA=00 INS=00 P1=00 P2=00
Here again, remember that this command cannot be undone!
Well done! Your card should now be pre-personalized and ready to use, even in a production environment. One final remark: the FUSE command probably does not need to be issued at all. However, if it is not issued, the card is completely insecure and should not be used in production.
NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 18th February 2017.
Recently I was asked to configure the internet browser on a thing called Vasco Translator Premium 7″. The device looks exactly like one of the many low-end Android tablets from China. And it happens to be one. The problem is that it was locked to the only allowed application, which is the translator. It has some minor extra functions like a camera, and it seems that was a mistake of its authors: they used the default Android apps as the Camera and Gallery applications (and forgot to lock the send message button in the latter 🙂 ).
First I have to highlight that this is not a full unlock or root of the device, but the ease of the process lets me suppose that rooting the thing should not be too difficult either. Our goal here is to open up the web browser, because, as it seems, the software below the shitty overlay is ordinary Android with all the apps in their place. And how useful can a tablet be without an internet browser?
Prerequisites
The tool we will use to do the trick is the good ol' WAP Push protocol. For some reason, in newer Android devices this dinosaur has been revived and is supported out of the box. The goal is to send such a message to our locked device and, since the gallery app mentioned above allows escaping to the Messaging app, read it there. This is probably the hardest part of the process, and it may require buying some additional hardware (if you have access to any service that you know sends WAP Push, you can use it and skip this part).
And that hardware is a GSM modem. It is highly possible that you already have one that can be used. The thing we need is the ability to send SMS through AT commands. Many Android phones allow that, at least if they are rooted; LTE/3G modems can probably do that too (I have not checked that personally). OK, since the procedure to get access to the AT interface is completely different for every device you can get, I have to leave you alone with that part. After some time, you will probably end up in minicom or a similar program, with parameters like 115200/8N1 or 9600/8N1. In my case (Android with a Qualcomm processor) the device is /dev/smd11 and the parameters are 115200/8N1. Now you can type AT to check whether the device you found really is an AT modem (it should respond with OK) and AT+CSCA? to check your SMS center address. You should be able to recognize it or Google it to check that it really belongs to your mobile operator.
Crafting SMS
Now, since we have all the tools, we can start crafting the SMS. I will omit many details here, since even a general description of the PDU format would take a whole article, and the complete one is more than 100 pages long. The only part you need to care about is the destination address: this will be the phone number of your device. The trailer of the message is the WAP Push payload, which would need another 100 or so pages to describe, so let's skip it too. As a remark, there is a program called wbxml2xml/xml2wbxml that allows reading/writing the message body. In our case, we want to force the device to visit Google.com, so this will be the address of the WAP bookmark.
Ok, so in the picture above, the things we are interested in are [dest_addr] and [dest_len]. The first encodes the telephone number "+37201234567" (note the lack of the '+' sign), the second its length (as a number of digits, 0x0b == 11). The number of your device should be placed here, and then you can move on to the next section.
Or you can try customizing the payload. The important part is marked as [WBXML] and can be crafted with the program mentioned before. After changing it, the [ud_len] value has to be adjusted to the number of bytes in the payload (those after the length byte).
Sending SMS
Since we already have a modem, we need to type an AT command to initiate message sending. But before that, we need to ensure that we are in binary (PDU) mode. Type AT+CMGF? and, if the value is other than zero, AT+CMGF=0. Now start sending with:
AT+CMGS=55
where 55 is the length of the payload in bytes, but without the SMSC header (the single byte at the beginning). The modem should respond with a > prompt, where the SMS can be typed.
After that press CTRL-Z (^Z) in your terminal. This sends SUB (substitute) to the modem. It is important not to type any characters in between, like spaces or ENTER. After about a second, you should see that sending was successful and no error was returned.
Receiving and opening
Now, if your translator is turned on, you should hear that a new message was received, but nothing will appear on the screen. That is OK. The rest of the procedure is shown in the video below:
Postscriptum
After another few minutes of playing with the device I found another method of opening the browser, and it is much faster than what was described above. But the first one was much more entertaining for me and shows one of the many places where serious bugs can be found – forgotten technologies, still implemented and possibly used, but with a general lack of knowledge about their details.
You can see the other method in the video below, and possibly it is the one you want to use.
NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 8th February 2017.
Some time ago I was struggling with a JCOP smart card. The one I received, as it turned out, was not pre-personalized, which means some interesting features (like setting the encryption keys and PIN) were still unlocked. Because the documentation and all the usual helpers (StackOverflow) were not very useful (well, OK, there was no publicly available documentation at all), I started a very deep search on Google, which ended in full success. I was able to make a dump of the whole memory available during pre-personalization.
Since it is not something that can be found online, here you have a screenshot of it, colored a bit with the help of my hdcb program. Without documentation it might not be very useful, but in some emergency situation maybe somebody will need it.
A small explanation: the first address I was able to read was 0xC000F0; the first address with a read error after the configuration area was 0xC09600. I know that, despite the lack of privileges, some data is placed there.
There are three configurations: cold start (0xc00123-0xc00145), warm start (0xc00146-0xc00168) and contactless (0xc00169-at least 0xc0016f). Description of the coding of the individual fields is outside the scope of this article. I hope to describe them in the future.
Next time, I will try to describe the process of pre-personalization, that is, making a non-pre-personalized card – easy to get from the usual sources of cheap electronics – able to receive and run applets.
Update: Next part of this tutorial can be found under this link.
NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 12th September 2016.
Lately I tried to download some file from a website to my Android smartphone. A simple thing, yeah? Well, not really. Unfortunately, mobile browser developers have removed many features from their mobile distributions. One of them is the possibility of downloading a random page to disk as is. Instead (this is the case at least with Mozilla's product) they force a "Download as PDF" feature. I had a bit of luck, because the file I was trying to download was an MP4 movie, which is downloadable, maybe not in an intuitive way, but it is. But before I found that feature hidden in the player's context menu, I tried another solution – wget. Since I am a great fan of terminals, I have busybox installed on my phone. Those of you who know what busybox is know that it is a set of lightweight variants of the most standard UNIX tools. So, if they are lightweight, they had to cut some part of the tools' functionality, right? And in the case of my busybox's wget, they cut HTTPS support. And today it is more likely to encounter a site that is HTTPS-only than one that is HTTP-only, at least when talking about popular sites. So I had to get my own distribution of wget that would not be so constrained.
Not to bore you too much: here you can find a binary distribution of what I managed to compile. It was compiled for the ARMv7 platform using NDKr12b and API level 24 (Nougat), so it will probably not work on most current Android phones, but if you are reading this later, it probably works on your device or is even outdated. If you are interested in recompiling the binaries yourself, you can find a detailed how-to in the rest of this article.
Dependencies
Before compiling wget itself, you have to have the whole bunch of its dependencies. But first, you of course need the Android compiler. It is distributed as part of the NDK and I won't describe its installation here. Sources of every program compiled here can be grabbed from their official sites (list at the end of this post). The only exception is libtasn1, which required a few hacks to make it compile with the Android bionic libc. Its source, ported to Android, can be taken from my github repo.
Let's start with the programs that do not depend on anything. For all projects, the procedure is more or less the same and can be described by this simplified bash script:
tar -zxvf program-1.00.tar.gz
mkdir build
mkdir install
cd build
CC=arm-linux-androideabi-gcc AR=arm-linux-androideabi-ar RANLIB=arm-linux-androideabi-ranlib CFLAGS=-pie \
../program-1.00/configure --host=arm-linux --prefix=/data/local/root
make
make install DESTDIR=$(dirname `pwd`)/install/
cd ../install
tar -zcvf program.tar.gz *
gmp, libidn and libffi
For these three programs, the procedure above should work without any modification.
nettle
Since nettle depends on gmp, it has to be configured with the paths to the gmp binaries and headers in its CFLAGS and LDFLAGS variables. They should look like the ones in the gnutls script further below, just with the gmp include and lib directories only.
libtasn1
This was the hardest part for me, but it should go smoothly now. The script below should do the work correctly:
git clone git@github.com:v3l0c1r4pt0r/android_external_libtasn1.git
mkdir build
mkdir install
cd build
CC=arm-linux-androideabi-gcc AR=arm-linux-androideabi-ar RANLIB=arm-linux-androideabi-ranlib CFLAGS=-pie \
../libtasn1/configure --host=arm-linux --prefix=/data/local/root --disable-doc
make
make install DESTDIR=$(dirname `pwd`)/install/
cd ../install
tar -zcvf libtasn1.tar.gz *
p11-kit
This is the last dependency of gnutls, which is the only, but very important, dependency of wget. Just embedding libtasn1 and libffi, as before, should do the job well.
Notice that libffi has no headers, so we add it just to CFLAGS here!
gnutls
This one was more complicated than the rest. As I mentioned above, it is very important for wget's functionality. wget's dependency on it could probably be turned off, but then we would not have TLS support. While compiling it I had some problems that seemed serious. There were a few errors while making it, so I had to call make twice, and even then it failed. Despite that, it seems to work after make install, which obviously failed too. In my case the following script did the job:
mkdir build
mkdir install
cd build
CC=arm-linux-androideabi-gcc AR=arm-linux-androideabi-ar RANLIB=arm-linux-androideabi-ranlib \
CFLAGS="-pie -I`pwd`/../../gmp/install/data/local/root/include -I`pwd`/../../nettle/install/data/local/root/include -I`pwd`/../../libtasn1/install/data/local/root/include -I`pwd`/../../libidn/install/data/local/root/include -I`pwd`/../../p11-kit/install/data/local/root/include" \
LDFLAGS="-L`pwd`/../../gmp/install/data/local/root/lib -L`pwd`/../../nettle/install/data/local/root/lib -L`pwd`/../../libtasn1/install/data/local/root/lib -L`pwd`/../../libidn/install/data/local/root/lib -L`pwd`/../../p11-kit/install/data/local/root/lib" \
../gnutls-3.4.9/configure --host=arm-linux --prefix=/data/local/root --disable-cxx --disable-tools
make || make
make install DESTDIR=$(dirname `pwd`)/install/ || true
cd ../install
tar -zcvf file.tar.gz *
Compilation
Since we should now have all the dependencies compiled, we can try compiling wget itself. The procedure is the same as for the dependencies; we just have to pass the path to gnutls, and then the standard configure, make, make install should work. However, if your NDK installation is fairly new and you have not hacked it before, you most likely don't have the <sys/fcntl.h> header, and make will complain about that. Luckily, Android itself has this header, but for unknown reasons it is kept directly in the include directory. To make wget (and any other program that uses it) compile, you can just point the "sys/" instance to <fcntl.h>, e.g. by symlinking or copying fcntl.h into the sys/ subdirectory under $TOOLCHAIN/sysroot, where $TOOLCHAIN/sysroot is the path at which you have your headers placed. Depending on the tutorial you used for setting up the toolchain, it may have a different structure.
Installation
All the commands I presented above imply that you have your custom-compiled binaries in "/data/local/root". I made it that way to have a clear separation between the default and busybox binaries. If you want to have them somewhere else, you should pass that path to the configure scripts of all the programs you compile. After a successful compilation of all the tools, I made a single tarball containing all the compilation output (the link was placed above). Its contents can be installed onto Android by typing
tar -zxvf wget-with-deps.tar.gz -C /
using adb shell or terminal emulator.
Sources
Below you can find links to the sources of all programs needed to follow this tutorial.
NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 10th February 2016.
As any observer of my projects has spotted, most of my biggest projects involve binary file analysis. Currently I am working on another one that requires such analysis.
Unfortunately, such analysis is not an easy task, and anything that eases or speeds it up is appreciated. Of course, there are some tools that make it a bit easier. One of them is hexdump. Even IDA Pro can help a bit. Despite them, I always felt that something was missing. When creating the xSDM and delz utilities, I was using hexdump output pasted into a LibreOffice document to mark different structure members with different colors. It worked, but selecting a 100-byte buffer line by line was just wasting precious time.
Solution
So why not automate the whole process? What we really need here is just hexdump output and a terminal emulator with color support. And that's why I've made HDCB – HexDump Coloring Book. Basically it is just an extension to the bash scripting language. The goal was to create a simple script that hides as much of its internals from the end user as possible and allows the user to start it from their shell using the good old ./scriptname.ext – and that's it. HDCB is masked as if it were a standalone scripting language. It uses a shebang, known from bash or python scripts, to let the user's shell know which interpreter to use (#!/usr/bin/env hdcb). Those who are python programmers should recognize the use of the env binary.
In fact it is a simple extension to the bash language, so users are still able to use any features available in bash. The main extensions are two commands: one (define) for defining variables and the other (use) for defining a field or an array of such a defined type. Such a script should be started with exactly one argument – the file that is meant to be hexdumped and analyzed.
Internals
The bash scripts are just a kind of cover over the real program. The main core of the program is the colour utility. It takes an unlimited number of parameters, grouped in fours. They are, in order: the offset of the byte being colored, the length of the field, the background color and the foreground color. On standard input, hexdump output (in fact only hexdump -C or hexdump -Cv output is supported) is provided. The program colors the hexdump according to the rules provided as arguments. This architecture allows a clever hacker to build the cover mentioned above in virtually any programming language.
Downloads, documentation and more
As usual, the program is available on my Github profile. The sources are provided under the GPLv3 license, so you are free to contribute to the project, and you are strongly encouraged to do so or to propose new functions. The program is meant to be expanded according to my future needs, but I will try to implement any good idea. The whole documentation, installation instructions and the like are also available on Github.
NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 6th November 2015.
Last year, I published a program for decrypting Microsoft Dreamspark's SDC files. Soon after that I wrote an article about the SDC file format and its analysis. Now it's time to complete the description with the newest data.
This article wouldn't have been written without the contribution of GitHub user @halorhhr, who spotted a multi-file SDC container and let me know on the project's page. Thanks!
When writing that post a year ago, I had no idea what a multi-file container really looks like. No suspicions could be confirmed back then, because it seemed that these files simply were not used in the wild. A few days ago the situation changed. I got a working sample of a multi-file container, so I was able to start its analysis.
Real container format
After a quick analysis, I knew that I had been wrong in my suspicions. The filename length and encrypted filename strings are not part of a file description. In fact, they are placed after all of them, and the filename field is a concatenated string of all filenames (including trailing null bytes). So, to sum up, the filename of the n-th element starts at file[n].filename_offset and ends just like any other C-style string.
The whole header structure is like the sample header on the right. Note that all headers besides the 0xb3 one have already been decrypted for readability. In a real header the only unencrypted field is the header size at the very beginning of the file. The 0xb3 sample has an unencrypted header and the header size is not present in the file. However, the file name is encrypted in some way I haven't figured out yet. The encryption method is blowfish-compat (the difference between this and blowfish is the ciphertext endianness). Filenames are encrypted once again.
After the header, all other data is XORed using the key from the EDV string and then deflated, so before reading it, you have to inflate and XOR it again. The format of the data in the 0xb3 version is still unknown; however, analysis of the compressed and file sizes hints that it may be stored the same way. It is important to note that, depending on the file signature, a different configuration of the deflater may be needed. It is now known that files older than the 0xd1 header, which appears to be the newest one (because only it supports files greater than 4 GiB), need to have the deflater initialized differently.
This errata does not contain all the information needed to support every variation of SDC files. Besides the unknowns I mentioned above, there is another variation that uses the 0xc4 signature, of which I have no sample. The only trace of its existence is a condition in the SDM code. Because of that I cannot write support for that type of file. There is also a possibility that multi-file containers with 0xb5 or 0xb3 signatures exist. That type of file seems to have appeared only lately, but it is probable that it existed in the past. Having no samples of them, I cannot verify that xSDM handles them properly.
So if you have a sample of any of the variations mentioned here, just send it to me at my email address: v3l0c1r4pt0r at gmail dot com, or, if you suppose that may be illegal in your country, just send me an SDX link or any other hint on how I can find them.
Other way?
A few days ago, after I started writing this post, Github user @adiantek let me know in the issues that there is a method to bypass SDM in the Dreamspark download process. To download a plain, unencrypted file you just have to replace 'dfc=1' with 'sdm=0' in the link Dreamspark provides in the SDX file. If it is true that this works for every file Dreamspark provides, my xSDM project would now be obsolete. However, because Microsoft's intentions when creating this backdoor (it seems to have been created just for debugging) are unknown, I will continue to support the project and fix any future bugs I become aware of. But now it seems that this project will just become a proof of concept for curious hackers and will slowly die.
Nevertheless, if you have something that might help me or anyone who may be interested in the SDC format in the future, just let me know somehow, so it will be available somewhere on the internet.