Printing pictures like it’s 1873 using the Oki 3321 dot-matrix printer

Steinway Hall, 1873
According to Wikipedia, the oldest halftone image printed in a newspaper, back in 1873

A long, long time ago, before the prices of inkjet and laser printers fell to levels allowing home users to own and use them, there was a primitive printing technology called dot-matrix. Like any technology of the past, it is not competitive anymore. However, it still has a few advantages, and one of them is the reliability of these devices. Some time ago I found a quite cheap Oki 3321 printer that has a 9-pin head and is capable of printing on A3 paper in portrait orientation. The usual mode of printing for these devices was a simple text mode, where you just wrote your text in ASCII (or whatever weird encoding was popular in your country of origin) to the printer’s parallel port. Fortunately, these printers usually also had a graphic mode, where you could use the full capabilities of the device.

I had already been experimenting with my device for some time, so I knew it uses a Mazovia variant (with zł as a single glyph) as its codepage. I was also able to guess how to switch it into graphic mode, so in theory I had been able to print images for a while. Unfortunately, none of the CUPS drivers I tried provided acceptable results, so all I could do was write a support tool myself.
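
Just to give an idea of what such a tool has to emit, below is a minimal sketch in C of printing a single row of pixels. It assumes the Epson-compatible ESC K (single-density, 8-pin) graphics command and direct access to the parallel port device – the Oki’s actual escape codes and device path may differ, so treat it as an illustration only:

#include <stdio.h>

int main(void)
{
  /* Each byte encodes a column of 8 vertical pins, MSB = top pin. */
  unsigned char columns[] = { 0x18, 0x3c, 0x7e, 0xff, 0xff, 0x7e, 0x3c, 0x18 };
  int n = sizeof(columns);
  FILE *lp = fopen("/dev/lp0", "wb");

  if (!lp)
    return 1;
  /* ESC K n1 n2 -- enter graphics mode for n1 + 256 * n2 columns. */
  fprintf(lp, "\x1bK%c%c", n & 0xff, (n >> 8) & 0xff);
  fwrite(columns, 1, n, lp);
  fputs("\r\n", lp); /* advance the paper to the next line */
  fclose(lp);
  return 0;
}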

mhz14a – program for managing MH-Z14/MH-Z14A CO2 sensors via UART

MH-Z14A CO2 sensor

When I saw a CO2 sensor for the first time, it was quite an expensive device. Well, if one wants to buy a consumer device these days, it can still cost a lot. However, in the times of cheap Chinese electronics sellers on the biggest auction platforms, the situation for makers is quite different now. The MH-Z14 is the cheapest CO2 sensor I was able to find. It costs about $19 and comes in a few variants: MH-Z14 and MH-Z14A. It can also measure up to 1000 ppm, up to 2000 ppm or up to 5000 ppm. However, the range does not matter much in practice, as it is possible to switch between them using UART.

The device’s interfaces are quite flexible for such a cheap device: beside the mentioned UART port, it provides PWM and analog outputs. However, I was not able to measure a valid value using the analog output and my cheap multimeter. Maybe some more sophisticated equipment is required for that.

I have to make one note here: the device I bought is labeled as MH-Z14A and its range is 0-5000 ppm. Other variants might have different features. For mine, there is no UART protocol documentation. Yet the protocol documented under the name MH-Z14 works, so be careful.
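
For the impatient, here is a minimal sketch in C of the read-concentration command, following the publicly available MH-Z14 protocol description (9600 8N1). The frame bytes and the /dev/ttyUSB0 path are assumptions to check against your variant:

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
  /* 0xFF = start byte, 0x01 = sensor number, 0x86 = read concentration,
   * last byte = checksum: 0xFF - (sum of bytes 1..7) + 1. */
  unsigned char cmd[9] = { 0xff, 0x01, 0x86, 0, 0, 0, 0, 0, 0x79 };
  unsigned char resp[9];
  size_t got = 0;
  struct termios tio;

  int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
  if (fd < 0)
    return 1;
  tcgetattr(fd, &tio);
  cfmakeraw(&tio);
  cfsetspeed(&tio, B9600);
  tcsetattr(fd, TCSANOW, &tio);

  if (write(fd, cmd, sizeof(cmd)) != sizeof(cmd))
    return 1;
  while (got < sizeof(resp)) {
    ssize_t n = read(fd, resp + got, sizeof(resp) - got);
    if (n <= 0)
      return 1;
    got += (size_t) n;
  }

  if (resp[0] == 0xff && resp[1] == 0x86)
    printf("CO2: %d ppm\n", resp[2] * 256 + resp[3]); /* HIGH * 256 + LOW */
  return 0;
}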

SADVE – tiny program for computing #define values

While tinkering with a spy camera, I found one thing that significantly slows down reverse engineering and debugging of the applications installed on its embedded Linux platform – finding the final values of preprocessor directives and, sometimes, the results of the sizeof() operator.

As I am not aware of any existing solution to that problem (I guess there might be one included in some of the more sophisticated IDEs, but I use Vim for development), it was a good enough reason to create one. By the way, I used the cmake template I published a few days ago to bootstrap the project.

Usage

Ease of use was the main goal here, as it is obviously possible to create an improvised solution by writing a hello-world type of program, including the required headers and printing the symbol whose value we want to compute.

So, to be able to use SADVE, you just have to clone the repo and use the standard cmake installation commands, and you’re done:

mkdir -p build
cd build
cmake ..
make
sudo make install

Then you can call it like below:

sadve -d AF_INET sys/socket.h

And you should get 2 as the answer. That’s it. If instead you want to get the size of some structure, you can type:

sadve -s sockaddr sys/socket.h

And you should get the size of the sockaddr structure. Obviously, you can see the full usage with sadve --help.

Internals

Internally, the program simply automates the process described in the first paragraph of Usage – it applies whatever the user requested to a hello-world-like template and compiles it. Therefore it might not be the best idea to make it the backend of a web service available to the general public, at least not without a lot of isolation and input sanitization. For private usage, however, this should be enough. If you are interested in doing such a task with cmake, I encourage you to dive into the source code on Github.
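
For illustration, the translation unit generated for sadve -d AF_INET sys/socket.h could look more or less like the sketch below (hypothetical – the actual template lives in the repository):

#include <stdio.h>
#include <sys/socket.h> /* the header(s) passed on the command line */

int main(void)
{
  /* For -s the line would become: printf("%zu\n", sizeof(struct sockaddr)); */
  printf("%d\n", AF_INET);
  return 0;
}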

To speed the process up, I made it store all the cmake build files in ~/.cache, with no interface for cleaning them up.

USB to serial converter drivers for Android revisited

CP2102

A few years ago I compiled kernel drivers of a cheap USB-to-serial converter for my previous Android phone. Then came a few years of using a new phone, without a single custom-compiled kernel module. Now it is time to change that. By the way, I am going to describe what has changed and what hacks have to be applied to make the process work on the stock ROM provided by Sony.

kernel is the key

First of all, we need a kernel. To be precise, kernel sources. Without them, it is really hard to be successful (I don’t want to say it is impossible, just really hard, believe me). Because Sony is very liberal in terms of cooperation with the community, they provide everything required to tinker with the device (obviously together with a caution message about warranty loss, but who cares, right? 🙂 ).

First of all, we need to know which firmware version the device uses. It can be found in Android settings, as the build number or something like that. For me, it is 23.5.A.0.575. Then we have to visit the Open Devices downloads section and find our firmware. For me, it meant a lot of scrolling, as I have had no updates available for quite some time. Inside the package, there should be a kernel directory with complete kernel sources.

Where is my .config?

The next thing we need to know is which defconfig to use. The full list should be in arch/arm/configs. Now, in the case of Sony phones, there is a slight problem, as they traditionally use codenames for their devices. In the case of the Xperia Pro I compiled for before, it was iyokan. For the Xperia Z3 Compact I use now, it is aries, and the only official source of those codenames I know of is their Github profile. Of course, it would be too easy if there were some mapping available, and searching for z3 gives no results. Fortunately, I know my device’s codename.

$ find . -name *aries*
./include/config/mach/sony/aries.h
./arch/arm/configs/shinano_aries_defconfig
./arch/arm/mach-msm/board-sony_aries-gpiomux.c
./arch/arm/mach-msm/board-sony_aries-gpiomux.o
./arch/arm/mach-msm/bms-batterydata-aries.o
./arch/arm/mach-msm/.board-sony_aries-gpiomux.o.cmd
./arch/arm/mach-msm/.bms-batterydata-aries.o.cmd
./arch/arm/mach-msm/bms-batterydata-aries.c
./arch/arm/boot/.msm8974pro-ac-shinano_aries.dtb.cmd
./arch/arm/boot/dts/msm8974pro-ac-shinano_aries_common.dtsi
./arch/arm/boot/dts/dsi-panel-aries.dtsi
./arch/arm/boot/dts/msm8974pro-ac-shinano_aries.dtsi
./arch/arm/boot/dts/msm8974pro-ac-shinano_aries.dts
./arch/arm/boot/msm8974pro-ac-shinano_aries.dtb

As we can see, there is only one config related to aries: shinano_aries_defconfig (Sony’s Github profile explains that Shinano is the platform name). Then we can safely use it in the defconfig phase.

Compilation (and hacking)

Once we have all the sources and the kernel configuration, we can start the compilation. Or actually, we cannot (probably).

Hacking

Let’s look at the vermagic of a random module already installed on the device:

# modinfo zl10353.ko                            
filename:       zl10353.ko
license:        GPL
author:         Chris Pascoe
description:    Zarlink ZL10353 DVB-T demodulator driver
parm:           debug_regs:Turn on/off frontend register dumps (default:off).
parmtype:       debug_regs:int
parm:           debug:Turn on/off frontend debugging (default:off).
parmtype:       debug:int
depends:
intree:         Y
vermagic:       3.4.0-perf-g43ea728 SMP preempt mod_unload modversions ARMv7

We can see at least two things that will cause trouble:

  1. -perf-g<sha-1>
  2. modversions

In the first case, a git commit id is appended to the kernel version. Unfortunately, we do not have their repository, so after module compilation we would end up with just 3.4.0. To fix the problem, we have to edit the main Makefile and set EXTRAVERSION to the missing part, so it looks like this:

VERSION = 3
PATCHLEVEL = 4
SUBLEVEL = 0
EXTRAVERSION = -perf-g43ea728
NAME = Saber-toothed Squirrel

The second detail forces us to compile the whole kernel. Otherwise, the Android kernel will try to check if the module is compatible with the running kernel (using CRC checksums) and will fail on the missing module_layout symbol CRC.

Bad hacking

In the case of very simple drivers, there is a way to omit kernel compilation. However, it is not the safest way to go and works as a permanent --force for modprobe/insmod. I advise skipping this section, unless you are really desperate (and you are not, before trying the proper way).

Go back to some random driver, like the one we used for the vermagic check, pull it to a PC and issue:

$ modprobe --dump-modversions zl10353.ko 
0x2067c442      module_layout
0x15692c87      param_ops_int
0xe6b3b90a      arm_delay_ops
0x59e5070d      __do_div64
0x5f754e5a      memset
0x0fc539b8      kmalloc_caches
0x9d669763      memcpy
0x52ac1d50      kmem_cache_alloc_trace
0x27e1a049      printk
0xfbc76af9      i2c_transfer
0x037a0cba      kfree
0xefd6cf06      __aeabi_unwind_cpp_pr0

From my experience, I know that module_layout is the most troublesome symbol. So why not add it to the modversions of our module? Just run (changing the CRC to match your modprobe output!):

echo -e '0x2067c442\tmodule_layout\tvmlinux\tEXPORT_SYMBOL_GPL' >> Module.symvers

This should trick the kernel into trusting the symbol, even though it will in fact be different in a kernel compiled by you. Then, after insmodding the module, built using the same shortcut as in my previous tutorial, you will probably see a lot of errors in dmesg. You can hunt for the symbols from there, and chances are it will work. I haven’t tested this personally and I discourage it, unless you really know what you are doing. It is a wiser choice to wait those few minutes for the kernel to compile.

Device specific hacking

During my compilation, I had to apply some more hacks, as I had problems with missing headers. This will probably be relevant only to this specific kernel version and device pair but, just in case, I am writing it down. You can safely skip to the compilation and only come back in case of problems with the framebuffer for MSM processors.

The following should fix the problem:

ln -s ../../drivers/video/msm/mdss/mdss_mdp_trace.h include/trace/mdss_mdp_trace.h
ln -s ../../drivers/video/msm/mdss/mdss_mdp.h include/trace/mdss_mdp.h

Compiling

Now, it should be fairly easy, though time consuming. Type the following commands, one after another:

ARCH=arm CROSS_COMPILE=arm-unknown-eabi- make shinano_aries_defconfig
ARCH=arm CROSS_COMPILE=arm-unknown-eabi- make
ARCH=arm CROSS_COMPILE=arm-unknown-eabi- make modules M=drivers/usb/serial CONFIG_USB_SERIAL=m CONFIG_USB_SERIAL_CP210X=m

If there is no unexpected error during the compilation, you should now be able to insmod your fresh module into the kernel. In the case of the CP2102 driver I compiled, there are in fact two drivers: usbserial and cp210x. cp210x depends on usbserial, so usbserial has to be inserted first. Afterwards, if you connect the device, you should see in dmesg that it succeeded, and in the case of cp210x, there should be a name of the USB tty device (most likely /dev/ttyUSB0, as there should be no other USB ttys before it).
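
On the device it might look like below (the module paths are an assumption – adjust them to wherever you pushed the files):

# insmod /data/local/tmp/usbserial.ko
# insmod /data/local/tmp/cp210x.ko
# dmesg | tail

cp210x must come second, and after plugging the converter in, the tail of dmesg should mention ttyUSB0.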

Just to prove that it works, below are photos of the Xperia, running Termux and minicom, connected to a Cubieboard2:

Sony Z3C - UART-USB plugged
Phone sniffing on Linux boot
Sony Z3C plugged to Cubieboard2 via UART
Phone connected to Cubieboard2

Setting up new v3 Hidden Service with ultimate security: Part 3: Client Authentication

Secure Card icon

This post is a part of Tor v3 tutorial. Other parts are:

  1. Hidden Service setup
  2. PKI and TLS
  3. Client Authentication
  4. Installing client certificates to Firefox for Android

As we now have a working Public Key Infrastructure, we are ready to use it for more than encrypting traffic (which is already encrypted by Tor). We can very easily turn on client verification on our server. This will prevent anybody without a valid certificate issued by us from visiting our hidden webpage – just in case hiding the domain name in hidden services version 3 leaks it somehow (which should not happen anymore in v3). In this part we will issue a client certificate (the procedure is almost identical to the server certificate), then configure httpd to require client identification, and finally configure Firefox to send the certificate. Let’s go!

Issuing user certificate

In my case, the tmp directory emulates the client machine and ca is my Certificate Authority, which issues certificates. We start by creating a request on the client side, then sign it on the CA side.

$ mkdir tmp
$ cd tmp
$ openssl genrsa -out v3l0c1r4pt0r@gmail.com.key.pem 4096
Generating RSA private key, 4096 bit long modulus
........++
..............................................++
e is 65537 (0x010001)
$ openssl req -config ../ca/intermediate/openssl.cnf -key v3l0c1r4pt0r@gmail.com.key.pem -new -sha256 -out v3l0c1r4pt0r@gmail.com.csr.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:PL
State or Province Name [England]:lodzkie
Locality Name []:
Organization Name [Alice Ltd]:r4pt0r Test Systems
Organizational Unit Name []:
Common Name []:v3l0c1r4pt0r@gmail.com
Email Address []:v3l0c1r4pt0r@gmail.com
$ chmod 400 v3l0c1r4pt0r@gmail.com.*.pem
$ cp v3l0c1r4pt0r@gmail.com.csr.pem ../ca/intermediate/csr/
$ cd ../ca
$ openssl ca -config intermediate/openssl.cnf -extensions usr_cert -days 375 \
> -notext -md sha256 -in intermediate/csr/v3l0c1r4pt0r@gmail.com.csr.pem \
> -out intermediate/certs/v3l0c1r4pt0r@gmail.com.cert.pem
Using configuration from intermediate/openssl.cnf
Enter pass phrase for /home/r4pt0r/Research/cubie/newtor/ca/intermediate/private/intermediate.key.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 4097 (0x1001)
        Validity
            Not Before: Feb 27 17:14:40 2018 GMT
            Not After : Mar  9 17:14:40 2019 GMT
        Subject:
            countryName               = PL
            stateOrProvinceName       = lodzkie
            organizationName          = r4pt0r Test Systems
            commonName                = v3l0c1r4pt0r@gmail.com
            emailAddress              = v3l0c1r4pt0r@gmail.com
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            Netscape Cert Type:
                SSL Client, S/MIME
            Netscape Comment:
                OpenSSL Generated Client Certificate
            X509v3 Subject Key Identifier:
                ED:24:E6:FF:1D:9B:61:AC:29:66:39:59:FB:5D:77:25:F7:A3:55:47
            X509v3 Authority Key Identifier:
                keyid:3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70

            X509v3 Key Usage: critical
                Digital Signature, Non Repudiation, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Client Authentication, E-mail Protection
Certificate is to be certified until Mar  9 17:14:40 2019 GMT (375 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
$ cd ../tmp
$ cp ../ca/intermediate/certs/v3l0c1r4pt0r@gmail.com.cert.pem ./
$ openssl pkcs12 -export -inkey v3l0c1r4pt0r@gmail.com.key.pem -in v3l0c1r4pt0r@gmail.com.cert.pem -out v3l0c1r4pt0r@gmail.com.p12
Enter Export Password:
Verifying - Enter Export Password:

The last step was packaging the certificate and key into a PKCS#12 container. This is for securing the key (we can encrypt it with a password), and it is the form required by Firefox. After creating the .p12 (and verifying it is fine), we can (and SHOULD) delete the source files, as they are not protected in any way.
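
For example, assuming GNU coreutils is available:

$ shred -u v3l0c1r4pt0r@gmail.com.key.pem v3l0c1r4pt0r@gmail.com.cert.pem v3l0c1r4pt0r@gmail.com.csr.pem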

Configuring httpd to require user certificate

To enforce client verification, the following lines must be added to the virtual host configuration; in our case they can go just after the SSL certificate file paths.

    SSLVerifyClient require
    SSLVerifyDepth 2

We have to reload httpd for changes to take effect.

Installing certificate to Firefox

At last, to start using the newly generated certificate, we should install it in Firefox. The procedure is similar to the one for the CA certificate. We need to open the Certificate Manager window, but instead of going to Authorities, we go to Your Certificates. There we click Import and select the .p12 file.

Firefox Certificate Manager
Certificate Manager / Your Certificates

If the file has a password, Firefox will ask for it and then read the content. If everything went well, you should see your certificate on the list. Now we can try connecting to our hidden service. We should see a window like this:

Firefox - User Identification Request
Server asks for client’s identity

Finally, after confirmation, you should see your hidden service content. Congrats!

Setting up new v3 Hidden Service with ultimate security: Part 1: Hidden Service setup

This post is a part of Tor v3 tutorial. Other parts are:

  1. Hidden Service setup
  2. PKI and TLS
  3. Client Authentication
  4. Installing client certificates to Firefox for Android

As a student, I was lucky to have unlimited private Git repositories on Github, since they introduced them to their first paid plan. Unfortunately, I don’t have access to an educational e-mail anymore, so I won’t be able to renew the service. This leads to a need to migrate that feature somewhere else. Some time ago, I installed cgit and gitolite on my single board computer (SBC). But, because of Github, there was no need to use them. Now this seems like a good replacement for Github’s Developer plan.

A few weeks ago, there was an interesting event – the Tor Project introduced a new version of their Hidden Services – v3, which changes the length of a hidden service address in the .onion domain and disables a “feature” that enabled some nodes in the network to index all existing service addresses. This seems like a good moment to give it a try and check how fast (or rather how slow) a solution serving git through Tor on a few-year-old SBC will be. By the way, I will show how to configure things with maximum security in mind.

Disclaimer: I am not a person with deep knowledge of the inner workings of the Tor network, so I strongly encourage you to read a thing or two about how to use it safely. This article might contain errors that could reveal your identity, especially when used together with hidden services you do not own yourself.

Prerequisites

Let’s start with a summary of what we will need to make Tor v3 work:

  • tor in version 0.3.2.9 or higher
  • alternatively Tor Browser 7.5 or higher
  • for Android: Orbot and Orfox (at the moment of writing there is no support in the current version of Orbot, so a custom compilation is required – I am using Termux to provide the tor binary)
  • httpd or any other HTTP server, able to serve a single vhost on a separate TCP port

Because of the way I am planning to configure the hidden service in the future, it might be a good idea to set up a separate Tor Browser dedicated to this service at this moment, if this is going to be a production configuration. If this is just an experiment, this advice can safely be ignored. However, it is good to know how to undo the modifications to the browser that will be made in the next parts.

httpd

What we need to do is listen on localhost, on some random TCP port. Then we will set up httpd to serve only one virtual host on this custom port. It would be perfect to disable any other vhosts, because our hidden service will also work as a non-hidden service for local users; so if some other service is buggy and allows connections to other local services (see e.g. DNS rebinding), at the very least the address of our hidden service would be compromised.

I have the following configuration:

Listen 666

<VirtualHost *:666>
    ServerAdmin [email]@[domain]
    DocumentRoot "[path]/public_html"
    ServerName [domain].onion
    ErrorLog "[path]/error_log"
    CustomLog "[path]/access_log" common
</VirtualHost>

<Directory "[path]/public_html">
    DirectoryIndex index.html index.php index.txt
    AllowOverride All
    Options FollowSymlinks
    Require all granted
</Directory>

Furthermore, httpd must be able to traverse into the public_html directory, so every directory from public_html up to the root must have the execute permission for the http user, and the directory itself, as well as its content, must be readable (or better, owned) by http.

After that, and after starting httpd, it should be possible to visit http://localhost:666 in a web browser and see the content of the public_html directory. If this is true, we can move on to the tor configuration.

tor

SocksPort auto

HiddenServiceDir /etc/tor/hsv3
HiddenServiceVersion 3
HiddenServicePort 80 127.0.0.1:666

SafeLogging 0
Log notice stdout
Log notice file /etc/tor/hsv3/hs.log
Log info file /etc/tor/hsv3/hsinfo.log

Now, on the first startup of tor, it should create the keys for our new hidden service. We can look into /etc/tor/hsv3/hostname to see the .onion address. It is a good idea to make the key files and the hostname file readable only by the user running the tor service. In the case of a service started by systemd, this will probably be tor by default.
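
For example (assuming the service really runs as the tor user – this differs between distributions):

# chown -R tor:tor /etc/tor/hsv3
# chmod -R u=rwX,go= /etc/tor/hsv3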

After starting the tor service (systemctl start tor in the case of systemd), we can check if everything works properly by visiting our hidden service with a tor-enabled browser (using tor 0.3.2.9 or higher). That’s it.

Firefox for Android

At the time of writing this article, there is still no upgrade for the Orbot app, which provides a GUI interface for tor. Because of that, it might be required to use ordinary Firefox with tor as a proxy, which is generally a bad idea for connecting to any hidden services, because of privacy and anonymity. Fortunately, we can live with revealing our identity to ourselves 🙂 so we can do it just this single time.

What we need to change are the following configuration options, available under the about:config page:

  • network.proxy.socks to localhost
  • network.proxy.socks_port to 9050
  • network.proxy.socks_remote_dns to true
  • network.proxy.socks_version to 5, if any other (should be default)
  • network.proxy.type to 1 (0 means no proxy, 5 is system proxy)

Conclusion

Now we are ready to use our hidden service, from both desktop and mobile. Still, we use only the HTTP protocol, which is not a big problem, as tor already provides encryption. Nevertheless, our next goal will be to configure HTTPS. And then we will configure client authentication for the ultimate security of our hidden service.

LKV373A: porting objdump

This article is part of series about reverse-engineering LKV373A HDMI extender. Other parts are available at:

After part number four, we already have an ELF file storing all the data we found in the firmware image, described in a way that should make our analysis easier. Moreover, we have the ability to define new symbols inside our ELF file. The next step is to add support for our custom architecture to objdump, and this is what I want to show in this tutorial.

Finding best architecture to copy

If we want to set up a new architecture in the objdump code, we need to learn the interfaces that have to be implemented. It is easier if we can use some existing code as a reference. After some looking into the binutils code, I learned that the bfd and opcodes libraries are of special interest: they contain the code dedicated to particular architectures. The first one seems to be related to object file handling (which in our case means ELF), so we should not tinker with it too much. The second one is related to disassembling binary programs, so it is what we are looking for.

I did a quick examination of the source code related to popular architectures, and most of it seems not easy to adjust to our needs. The architecture I found best suited for modification is MicroBlaze. Its source seems to be quite well-written, clean and short. Also, from my research on the architecture name for the LKV373A (part 2, failed by the way), I remember it is quite similar to the one present in the LKV373A, so it is an even better choice.

Compiling objdump for target architecture

At first, it is useful to learn how to compile objdump so that it is able to disassemble a program written for our target. MicroBlaze is not really a mainstream architecture, so there aren’t many programs compiled for it available online after typing 'microblaze program elf' into the usual search engine. However, I was able to find two of them, which let me verify that the compilation worked. If you can’t find any, I uploaded them to MEGA, so they can serve as test cases. The first one is a minimal valid file, the other one is quite huge.

Compilation is very easy. The only thing that needs to be done beside the usual ./configure && make && make install is adding the target option to the configure script. So, the invocation looks as follows:

./configure --target=microblaze-elf

Of course, the install step can safely be skipped, as can the compilation of the other tools beside objdump. objdump itself seems to be built with make binutils/objdump. However, it can’t be built successfully using that shortcut alone, so the whole binutils package must be configured in such a way that everything non-buildable is excluded from the build.

Setting up own architecture

The next step is to add support for our brand new, custom architecture to binutils’ configuration files and copy the MicroBlaze sources, so they will simulate our architecture until we write our own implementation. Then it should be possible to test objdump again, against our sample MicroBlaze programs, and disassembly should still work.

Even without any modification to binutils’ sources or configs, it should be possible to configure it for any random architecture. The only constraint is the format of the target string: ARCH-OS-FORMAT, where FORMAT is most likely to be elf. So, if we pass lkv373a-unknown-elf as the target, it will work. The -unknown part is usually skipped, but the shortened form will not work here. If we need it to work, config.sub must be modified. config.sub is used to convert any string passed to configure into canonical form, in our case lkv373a-unknown-elf. If it detects that the string is already in canonical form, it does nothing.

The final configure command will be slightly more complex, as we have to disable some parts that are not of interest to us and would require additional effort to build:

./configure --target=lkv373a-unknown-elf --disable-gas --disable-ld --disable-gdb

Although passing something random as the target option works at the configure stage, it will obviously fail at the make stage. What make does at first is configure all the sublibraries; the ones of interest to us are bfd and opcodes. And the first one fails. So this is the first problem we need to get rid of.

bfd/config.bfd

The purpose of this file is to set some environment variables depending on the target architecture. If it does not know the architecture, it returns an error to its caller, which is probably bfd’s configure script, called by make. According to the documentation in the file header, it sets the following variables:

  1. targ_defvec – default vector. This links the target to the list of objects that will provide support for ELF files built for the specific architecture (stored in bfd/configure.ac)
  2. targ_selvecs – list of other selected vectors. Useful e.g. when we need support for both 32- and 64-bit ELFs. Not needed here.
  3. targ64_selvecs – 64-bit related stuff. Used when the target can be both 32- and 64-bit; meaningless in our case.
  4. targ_archs – name of the symbol storing the bfd_arch_info_type structure. It provides a description of the architecture to support.
  5. targ_cflags – looks like some hack to add extra CFLAGS to the compiler. We don’t care.
  6. targ_underscore – not sure what it is; it should have no impact on our goals (possible values are yes or no)

To sum up, what we need to do in this step is define the default vector (we will later add it to configure.ac) and set the name of the architecture description structure. The structure itself will be defined later. Finally, I ended up with the following patch:

@@ -173,6 +173,7 @@ hppa*)     targ_archs=bfd_hppa_arch ;;
 i[3-7]86)   targ_archs=bfd_i386_arch ;;
 i370)     targ_archs=bfd_i370_arch ;;
 ia16)     targ_archs=bfd_i386_arch ;;
+lkv373a)  targ_archs=bfd_lkv373a_arch ;;
 lm32)           targ_archs=bfd_lm32_arch ;;
 m6811*|m68hc11*) targ_archs="bfd_m68hc11_arch bfd_m68hc12_arch bfd_m9s12x_arch bfd_m9s12xg_arch" ;;
 m6812*|m68hc12*) targ_archs="bfd_m68hc12_arch bfd_m68hc11_arch bfd_m9s12x_arch bfd_m9s12xg_arch" ;;
@@ -924,6 +925,10 @@ case "${targ}" in
     targ_defvec=iq2000_elf32_vec
     ;;

+  lkv373a*-*)
+    targ_defvec=lkv373a_elf32_vec
+    ;;
+
   lm32-*-elf | lm32-*-rtems*)
     targ_defvec=lm32_elf32_vec
     targ_selvecs=lm32_elf32_fdpic_vec

bfd/configure.ac

Now we need to define the vector we have just declared for the lkv373a architecture.

505     k1om_elf64_fbsd_vec)         tb="$tb elf64-x86-64.lo elfxx-x86.lo elf-ifunc.lo elf-nacl.lo elf64.lo $elf"; target_size=64 ;;
506     lkv373a_elf32_vec)           tb="$tb elf32-lkv373a.lo elf32.lo $elf" ;;
507     l1om_elf64_vec)              tb="$tb elf64-x86-64.lo elfxx-x86.lo elf-ifunc.lo elf-nacl.lo elf64.lo $elf"; target_size=64 ;;

Unfortunately, as we made modifications to the .ac script, we now need to rebuild configure. From my experience, any tinkering with autohell, after solving one problem, creates 5 more. We need to get into the bfd directory and reconfigure the project:

cd bfd
autoreconf

Now, if it worked for you, you should definitely go play the lottery 🙂 . For me, it said that I need exactly the same version of autoconf as the one used by binutils’ developers. Because autoconf is so great, what I will show now is probably completely useless for anyone else, but the first hack I needed to do was to add:

20 m4_define([_GCC_AUTOCONF_VERSION], [2.69])

to the beginning of the configure.ac file. Then, bfd/doc/Makefile.am contains the (since removed from automake) cygnus option at the beginning, in AUTOMAKE_OPTIONS, so we need to remove it. After that, doing automake --add-missing, as autoreconf suggests, and then autoreconf again, should solve the problem. But, as I said, this will probably not work for you. I can only wish you good luck.

(if you were following the steps, you might have noticed that autoconf complained about not being version 2.64, while we overrode the version from 2.69 to 2.69 and it worked 🙂 – don’t ask me why, please!)

After this step, the compilation should start (and obviously will fail miserably on bfd, as it misses a few symbols). Now it’s time to make bfd compilable.

bfd/elf32-lkv373a.c

This file is meant to provide support for custom features of the ELF file. As we don’t have any, we can safely do nothing here. A good template for such a file is elf32-m88k.c, as it does exactly that.

One thing that seems to be important here is the EM value of the architecture being described. EM is an enum used in an ELF file to define the target architecture, so it might be required to adjust it in our new elf32-lkv373a.c file. By the way, a definition of this value has to be added to include/elf/common.h:

433 /* LKV373A architecture */
434 #define EM_LKV373A              0x373a

It might also be a good idea to add it to elfcpp/elfcpp.h. To make the file compile, it is necessary to add the following to bfd/bfd-in2.h as a value of the bfd_architecture enum:

2398   bfd_arch_lkv373a,    /* LKV373A */

bfd/archures.c

As we declared bfd_lkv373a_arch as the symbol with the CPU description structure, we now need to add this declaration to archures.c, as this is the file where it will be used. We have to add:

611 extern const bfd_arch_info_type bfd_l1om_arch;
612 extern const bfd_arch_info_type bfd_lkv373a_arch;
613 extern const bfd_arch_info_type bfd_lm32_arch;

bfd/targets.c

A similar situation occurs in the targets.c file. Here we have to provide a declaration of our vector as a bfd_target. This is another structure which seems to be generated automatically, so we should not worry about it.

704 extern const bfd_target l1om_elf64_fbsd_vec;
705 extern const bfd_target lkv373a_elf32_vec;
706 extern const bfd_target lm32_elf32_vec;

bfd/cpu-lkv373a.c

This last file we need in bfd provides the bfd_arch_info_type structure and… that’s it! It can easily be borrowed from cpu-microblaze.c with only slight modifications. One thing that needs explanation here is section_align_power. As far as I understand it, it is the power of two to which the beginning of a section in memory must be aligned. It should be safe to put 0 here, as we are not going to load our ELF into memory.
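
Modeled on cpu-microblaze.c, the whole file boils down to a single structure, sketched below (the field order follows bfd_arch_info_type from the binutils sources I worked with, so double-check it against your tree):

#include "sysdep.h"
#include "bfd.h"
#include "libbfd.h"

const bfd_arch_info_type bfd_lkv373a_arch =
{
  32,                      /* Bits in a word.  */
  32,                      /* Bits in an address.  */
  8,                       /* Bits in a byte.  */
  bfd_arch_lkv373a,        /* Value added to bfd_architecture.  */
  0,                       /* Only one machine for now.  */
  "lkv373a",               /* Architecture name.  */
  "LKV373A",               /* Printable name.  */
  0,                       /* section_align_power - safe to leave as 0.  */
  TRUE,                    /* The default machine for this architecture.  */
  bfd_default_compatible,
  bfd_default_scan,
  bfd_arch_default_fill,
  NULL                     /* Next bfd_arch_info_type in the list.  */
};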

This closes the bfd part of the initialization. As you can see, there was no real development to be done here. Let’s now move to the opcodes library.

opcodes/configure.ac

At first, we need to define the objects to build for the LKV373A architecture in the opcodes library. This is quite similar to what we had to do in configure.ac of the bfd library.

282         bfd_iq2000_arch)        ta="$ta iq2000-asm.lo iq2000-desc.lo iq2000-dis.lo iq2000-ibld.lo iq2000-opc.lo" using_cgen=yes ;;
283         bfd_lkv373a_arch)       ta="$ta lkv373a-dis.lo" ;;
284         bfd_lm32_arch)          ta="$ta lm32-asm.lo lm32-desc.lo lm32-dis.lo lm32-ibld.lo lm32-opc.lo lm32-opinst.lo" using_cgen=yes ;;

Hopefully, the -dis file will be the only one that needs to be implemented. I’ve made a copy of the MicroBlaze configuration. In the same way, we will copy the whole source file and any related headers in the next step.

Now, similarly to bfd’s configure.ac, we have to reconfigure it. And again, nobody knows what errors we will encounter.

opcodes/disassemble.c

The only thing that has to be done here is setting the pointer to the disassemble function. For this, the following snippets should be added:

53 #define ARCH_lkv373a
255 #ifdef ARCH_lkv373a
256     case bfd_arch_lkv373a:
257       disassemble = print_insn_lkv373a;
258       break;
259 #endif

And to disassemble.h:

62 extern int print_insn_lkv373a           (bfd_vma, disassemble_info *);

opcodes/lkv373a-dis.c

This is where the real stuff will happen. As our goal for now is not to implement the LKV373A architecture, but rather to set everything up so that objdump builds, we can copy the source file from microblaze-dis.c. It is also required to copy the MicroBlaze-related headers used by this file, so:

  • opcodes/microblaze-dis.h
  • opcodes/microblaze-opc.h
  • opcodes/microblaze-opcm.h

And change the include directives in them to point to the lkv373a files, rather than the microblaze ones.

Now, we could optionally change the names of any symbols referring to the name microblaze, but this should not be required, as the original microblaze files should not be included in the build. The only change that needs to be made is print_insn_microblaze into print_insn_lkv373a, as this is what we added to disassemble.c.

You should now be able to compile a working objdump with LKV373A support (of course with the wrong implementation, for now). We can verify that everything works on a slightly modified ELF file for the MicroBlaze architecture (the EM field must point to LKV373A – the value must be 0x373a). Well done!

NOTE: all the steps done till now are available under the tutorial-setup tag in the repository on Github.

Functions to implement

Now the real fun finally starts. The bindings between the opcodes library and objdump itself require at least print_insn_lkv373a to be implemented.

What should happen inside this function is quite simple and can be described by the following steps (a minimal skeleton combining them is sketched after the reference below):

  1. Get bfd_vma and struct disassemble_info (called info below) as parameters
  2. Read the raw data containing instructions using info->read_memory_func
  3. Call info->memory_error_func in case of any errors
  4. Use info->fprintf_func to print the disassembled instruction into info->stream
  5. Optionally use info->symbol_at_address_func to determine if there is any symbol declared at the address decoded from the instruction
  6. If a symbol exists, call info->print_address_func
  7. Return the number of bytes consumed

Below is some documentation I wrote to ease the implementation (mostly translated from the inline comments) of the functions to be called:

  /**
   * \brief Function used to get bytes to disassemble
   *
   * \param memaddr Address of the current instruction
   * \param myaddr Buffer, where the bytes will be stored
   * \param length Number of bytes to read
   * \param dinfo Pointer to info structure
   *
   * \return errno value or 0 for success
   */
  int (*read_memory_func)
    (bfd_vma memaddr, bfd_byte *myaddr, unsigned int length,
     struct disassemble_info *dinfo);
  /**
   * \brief Call if unrecoverable error occurred
   *
   * \param status errno from read_memory_func
   * \param memaddr Address of current instruction
   * \param dinfo Pointer to info structure
   */
  void (*memory_error_func)
    (int status, bfd_vma memaddr, struct disassemble_info *dinfo);
  /**
   * \brief Pointer to fprintf
   *
   * \param stream Pass info->stream here
   * \param char Format string
   * \param ... vargs
   *
   * \return Number of characters printed
   */
  typedef int (*fprintf_ftype) (void *, const char*, ...) ATTRIBUTE_FPTR_PRINTF_2;
  /**
   * \brief Determines if there is a symbol at the given ADDR
   *
   * \param addr Address to check
   * \param dinfo Pointer to info structure
   *
   * \return If there is returns 1, otherwise returns 0
   * \retval 1 If there is any symbol at ADDR
   * \retval 0 If there is no symbol at ADDR
   */
  int (* symbol_at_address_func)
    (bfd_vma addr, struct disassemble_info *dinfo);
  /**
   * \brief Print symbol name at ADDR
   *
   * \param addr Address at which symbol exists
   * \param dinfo Pointer to info structure
   */
  /* Function called to print ADDR.  */
  void (*print_address_func)
    (bfd_vma addr, struct disassemble_info *dinfo);
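
Putting the steps together, a minimal skeleton of the function could look like the sketch below. It assumes fixed-size, 4-byte, big-endian instruction words, which is only my guess about this architecture:

#include "sysdep.h"
#include "disassemble.h"

int
print_insn_lkv373a (bfd_vma memaddr, struct disassemble_info *info)
{
  bfd_byte buffer[4];
  unsigned long insn;
  int status;

  /* Step 2: fetch the raw instruction bytes.  */
  status = info->read_memory_func (memaddr, buffer, 4, info);
  if (status != 0)
    {
      /* Step 3: report the error and bail out.  */
      info->memory_error_func (status, memaddr, info);
      return -1;
    }

  insn = (unsigned long) bfd_getb32 (buffer);

  /* Step 4: print the (not yet decoded) instruction word.  */
  info->fprintf_func (info->stream, ".word\t0x%08lx", insn);

  /* Steps 5-6 would go here: once a target address can be decoded from
     insn, check info->symbol_at_address_func (target, info) and, if it
     returns 1, call info->print_address_func (target, info).  */

  /* Step 7: number of bytes consumed.  */
  return 4;
}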

For an easier start of development, this commit can be used as a template. You can find the effects of implementing things according to this description on the lkv373a branch of my binutils fork on Github. After this step, you should have a working objdump, able to disassemble the architecture of your choice.

Alternative way

According to binutils’ documentation, porting to new architectures should be done using a different approach. Instead of copying sources from other architectures, developers should write CPU description files (the cpu/ directory) and then use CGEN to generate all the necessary files. However, I found these files way too complicated compared to the goal I wanted to achieve, therefore I took the shortcut. In reality, however, this might be the better way, as the final result should be support for the new architecture not only in objdump, but also in e.g. GAS (the GNU assembler). If you want to go that way, another useful resource might be the description of the CPU description language.

Plans for the future

As I am now able to speed up reverse engineering of both the instruction set and the LKV373A firmware, I am planning to create a public repository of my progress and guess the operations performed by some more opcodes, as I know only a few of them so far. I will probably push some more commits to the binutils repo as well. I hope this will let me gain some more knowledge about the LKV373A and allow me, or someone else, to reverse engineer the second part of the firmware, which seems to be way more interesting than the one I have been reverse engineering till now.

[Import]Wget with SSL/TLS support for Android

NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 12th September 2016.

wget dependency tree

Lately I tried to download some file from a website onto my Android smartphone. A simple thing, yeah? Well, not really. Unfortunately, mobile browser developers removed many features from their mobile distributions. One of them is the possibility of downloading a random page to disk as-is. Instead (this is the case at least with Mozilla’s product), they are forcing a “Download as PDF” feature. I had a bit of luck, because the file I was trying to download was an MP4 movie, which is downloadable – maybe not in an intuitive way, but it is. But before I found that feature, hidden in the player’s context menu, I tried another solution – wget. Since I am a great fan of terminals, I have busybox installed on my phone. Those of you who know what busybox is should know that it is a set of lightweight variants of the most standard UNIX tools. So, if they are lightweight, they had to cut some part of each tool’s functionality, right? And in the case of my busybox’s wget, they cut HTTPS support. Today it is more likely to encounter a site that is HTTPS-only than one that is HTTP-only, at least when talking about popular sites. So I had to get my own distribution of wget, one that would not be so constrained.

Not to bore you too much: here you can find the binary distribution of what I managed to compile. It was compiled for the ARMv7 platform using NDK r12b and API level 24 (Nougat), so it will probably not work on most current Android phones; but if you are reading this later, it will probably work on your device, or may even be outdated. If you are interested in recompiling the binaries yourself, you can find a detailed how-to in the next part of this article.

Dependencies

Before compiling wget itself, you have to have a whole bunch of its dependencies. But first, you of course need an Android compiler. It is distributed as part of the NDK and I won’t describe its installation here. The sources of every program compiled here can be grabbed from their official sites (list at the end of this post). The only exception is libtasn1, which required a few hacks to make it compile with the Android bionic libc. Its source, ported to Android, can be taken from my github repo.

Let’s start with the programs that do not depend on anything. For all projects, the procedure is more or less the same and can be described with this simplified bash script:

tar -zxvf program-1.00.tar.gz
mkdir build
mkdir install
cd build
CC=arm-linux-androideabi-gcc AR=arm-linux-androideabi-ar RANLIB=arm-linux-androideabi-ranlib CFLAGS=-pie \
    ../program-1.00/configure --host=arm-linux --prefix=/data/local/root
make
make install DESTDIR=$(dirname `pwd`)/install/
cd ../install
tar -zcvf program.tar.gz *

gmp, libidn and libffi

For these three programs, the procedure above should work without any modification.

nettle

Since nettle depends on gmp, it has to be configured with the paths to the gmp binaries and headers in its CFLAGS and LDFLAGS variables. They should look like this:

CFLAGS="-pie -I`pwd`/../../gmp/install/data/local/root/include"
LDFLAGS="-L`pwd`/../../gmp/install/data/local/root/lib"

when invoking the configure script.

libtasn1

This was the hardest part for me, but it should go smoothly now. The script below should do the work correctly:

git clone git@github.com:v3l0c1r4pt0r/android_external_libtasn1.git
mkdir build
mkdir install
cd build
CC=arm-linux-androideabi-gcc AR=arm-linux-androideabi-ar RANLIB=arm-linux-androideabi-ranlib CFLAGS=-pie \
    ../libtasn1/configure --host=arm-linux --prefix=/data/local/root --disable-doc
make
make install DESTDIR=$(dirname `pwd`)/install/
cd ../install
tar -zcvf libtasn1.tar.gz *

p11-kit

This is the last dependency of gnutls, which is the only, but very important, dependency of wget. Just embedding libtasn1 and libffi should do the job well.

CFLAGS="-pie -I`pwd`/../../libtasn1/install/data/local/root/include"
LDFLAGS="-L`pwd`/../../libtasn1/install/data/local/root/lib -L`pwd`/../../libffi/install/data/local/root/lib"

Notice that libffi has no headers, so we add it just to LDFLAGS here!

gnutls

This one was more complicated than the rest. As I mentioned above, it is very important to wget’s functionality. While wget’s dependency on it could probably be turned off, we would have no TLS support then. When compiling it, I had some problems that seemed to be serious. There were a few errors while making it, so I had to call make twice, and even then it failed. Despite that, it seemed to work after make install, which obviously failed too. In my case, the following script did the job:

mkdir build
mkdir install
cd build
CC=arm-linux-androideabi-gcc AR=arm-linux-androideabi-ar RANLIB=arm-linux-androideabi-ranlib \
    CFLAGS="-pie -I`pwd`/../../gmp/install/data/local/root/include -I`pwd`/../../nettle/install/data/local/root/include -I`pwd`/../../libtasn1/install/data/local/root/include -I`pwd`/../../libidn/install/data/local/root/include -I`pwd`/../../p11-kit/install/data/local/root/include" \
    LDFLAGS="-L`pwd`/../../gmp/install/data/local/root/lib -L`pwd`/../../nettle/install/data/local/root/lib -L`pwd`/../../libtasn1/install/data/local/root/lib -L`pwd`/../../libidn/install/data/local/root/lib -L`pwd`/../../p11-kit/install/data/local/root/lib" \
    ../gnutls-3.4.9/configure --host=arm-linux --prefix=/data/local/root --disable-cxx --disable-tools
make || make
make install DESTDIR=$(dirname `pwd`)/install/ || true
cd ../install
tar -zcvf file.tar.gz *

Compilation

Since we should now have all the dependencies compiled, we can try compiling wget itself. The procedure here is the same as with the dependencies; we just have to pass the path to gnutls. Then the standard configure, make, make install should work. However, if your NDK installation is fairly new and you were not hacking it before, you most likely don’t have the <sys/fcntl.h> header, and make will complain about that. Luckily, Android itself has this header present, but for reasons unknown it is kept in the include directory directly. To make wget (and any other program that uses it) compile, you can just point the “sys/” instance to <fcntl.h> with a symlink, or do something like this:

echo "#include <fcntl.h>" > $TOOLCHAIN/sysroot/usr/include/sys/fcntl.h

where $TOOLCHAIN/sysroot is the path at which you have your headers placed. Depending on the tutorial you used when setting it up, it may have a different structure.

Installation

All the commands I presented above imply that you have your custom-compiled binaries in “/data/local/root”. I made it that way to have a clear separation between the default and busybox binaries. If you want to have them somewhere else, you should pass that path to the configure scripts of all the programs you are compiling. After successful compilation of all the tools, I made a single tarball containing the whole compilation output (the link to this file was placed above). Its content can be installed onto Android by typing

tar -zxvf wget-with-deps.tar.gz -C/

using adb shell or terminal emulator.

Sources

Below you can find links to the sources of all the programs needed to follow this tutorial.

[Import]HDCB – new way of analysing binary files under Linux

NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 10th February 2016.

As any observer of my projects has spotted, most of my biggest projects involve binary file analysis. Currently I am working on another one that requires such analysis.
Unfortunately, such analysis is not an easy task, and anything that eases it or speeds it up is appreciated. Of course, there are some tools that make it a bit easier. One of them is hexdump. Even IDA Pro can help a bit. Despite them, I always felt that something was missing here. When creating the xSDM and delz utils, I was using hexdump output pasted into a LibreOffice document to mark different structure members with different colors. It worked, but selecting a 100-byte buffer line by line was just a waste of precious time.

Solution

SDC file analyzed by HDCB script

So why not automate the whole process? What we really need here is just hexdump output and a terminal emulator with color support. And that’s why I’ve made HDCB – HexDump Coloring Book. Basically, it is just an extension to the bash scripting language. The goal was to create a simple script that hides as much of its internals from the end-user as possible and allows the user to just start it from his shell using the good old ./scriptname.ext, and that’s it. HDCB masquerades as a standalone scripting language. It uses a shebang, known from bash or python scripts, to let the user’s shell know which interpreter to use (#!/usr/bin/env hdcb). Those of you who are python programmers should recognize the usage of the env binary.

In fact, it is just a simple extension to the bash language, so users are still able to utilize any features available in bash. The main extensions are two commands: one (define) for defining variables and the other (use) for declaring a field or an array of that defined type. Such scripts should be started with exactly one argument – the file that is meant to be hexdumped and analyzed.

Internals

Bash scripts are just a kind of cover for the real program. The core of the program is the colour utility. It takes an unlimited number of parameters, grouped in fours. They are, in order: the offset of the byte being colored, the length of the field, and the background and foreground colors. As standard input, hexdump output is provided (in fact, only hexdump -C and hexdump -Cv outputs are supported). The program colors the hexdump according to the rules provided as arguments. This architecture allows a clever hacker to build the cover mentioned above in virtually any programming language.
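
An invocation could then look more or less like this (the color-name format is an assumption here – see the documentation on Github for the exact syntax):

hexdump -C file.bin | colour 0 4 red white 4 2 blue yellow

This would mark bytes 0-3 with white text on a red background and bytes 4-5 with yellow on blue.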

Downloads, documentation and more

As usual, the program is available on my Github profile. The sources are provided under the GPLv3 license, so you are free to contribute to the project, and you are strongly encouraged to do so, or to propose new functions. The program is meant to be expanded according to my future needs, but I will try to implement any good idea. The whole documentation, installation instructions and the like are also available on Github.

[Import]Airlive WN-151ARM UART pinout & root access

NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 24th November 2015.

Airlive WN-151ARM pinout

For the curious ones, here is the pinout of the serial connection. As you can see, the UART pins are on the J4 header (it should have pin 4 labeled, and pin 1 is the square one).

J4 header
Num. Function
1 VCC
2 RX
3 TX
4 GND

Edit: Oh, and one more thing: the goldpin header you can see in the picture was soldered by me, so do not be surprised if you have to hold the wires all the time during transmission.

Root access

There is also a possibility to gain root access without removing the cover and thus possibly voiding the warranty. You have to connect to the router’s AP and enter

http://192.168.1.254/system_command.htm

into your browser (panel authentication required). Now you can execute any command you want with root privileges! So let’s type

/usr/sbin/utelnetd -d &

into the Console command field and press the Execute button. If everything went well, you should now be able to connect to your router using telnet on its default TCP port 23. After that, you should see the BusyBox banner and a command prompt.

It is worth noting that this hidden console cannot be accessed by an unauthorized person, so only the router administrator can use it (in theory; in practice there are surely a lot of routers using default credentials, and the security of the httpd binary is unknown).