As we now have a working Public Key Infrastructure, we are ready to use it for more than encrypting traffic (which is already encrypted by Tor). We can very easily turn on client verification on our server. This will prevent anybody without a valid certificate issued by us from visiting our hidden webpage – just in case the domain name of our hidden service leaks somehow (which should not happen anymore in v3). In this part we will issue a client certificate (the procedure is almost identical to the one for the server certificate), then configure httpd to require client identification, and finally configure Firefox to send the certificate. Let's go!
Issuing user certificate
In my case the tmp directory emulates the client machine and ca is my Certificate Authority, which issues certificates. We start by creating a request on the client side, then sign it on the CA side.
$ mkdir tmp
$ cd tmp
$ openssl genrsa -out v3l0c1r4pt0r@gmail.com.key.pem 4096
Generating RSA private key, 4096 bit long modulus
........++
..............................................++
e is 65537 (0x010001)
$openssl req -config ../ca/intermediate/openssl.cnf -key v3l0c1r4pt0r@gmail.com.key.pem -new -sha256 -out v3l0c1r4pt0r@gmail.com.csr.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:PL
State or Province Name [England]:lodzkie
Locality Name []:
Organization Name [Alice Ltd]:r4pt0r Test Systems
Organizational Unit Name []:
Common Name []:v3l0c1r4pt0r@gmail.com
Email Address []:v3l0c1r4pt0r@gmail.com
$ chmod 400 v3l0c1r4pt0r@gmail.com.*.pem
$ cp v3l0c1r4pt0r@gmail.com.csr.pem ../ca/intermediate/csr/
$ cd ../ca
$ openssl ca -config intermediate/openssl.cnf -extensions usr_cert -days 375 \
> -notext -md sha256 -in intermediate/csr/v3l0c1r4pt0r@gmail.com.csr.pem \
> -out intermediate/certs/v3l0c1r4pt0r@gmail.com.cert.pem
Using configuration from intermediate/openssl.cnf
Enter pass phrase for /home/r4pt0r/Research/cubie/newtor/ca/intermediate/private/intermediate.key.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
Serial Number: 4097 (0x1001)
Validity
Not Before: Feb 27 17:14:40 2018 GMT
Not After : Mar 9 17:14:40 2019 GMT
Subject:
countryName = PL
stateOrProvinceName = lodzkie
organizationName = r4pt0r Test Systems
commonName = v3l0c1r4pt0r@gmail.com
emailAddress = v3l0c1r4pt0r@gmail.com
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
Netscape Cert Type:
SSL Client, S/MIME
Netscape Comment:
OpenSSL Generated Client Certificate
X509v3 Subject Key Identifier:
ED:24:E6:FF:1D:9B:61:AC:29:66:39:59:FB:5D:77:25:F7:A3:55:47
X509v3 Authority Key Identifier:
keyid:3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70
X509v3 Key Usage: critical
Digital Signature, Non Repudiation, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Client Authentication, E-mail Protection
Certificate is to be certified until Mar 9 17:14:40 2019 GMT (375 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
$ cd ../tmp
$ cp ../ca/intermediate/certs/v3l0c1r4pt0r@gmail.com.cert.pem ./
$ openssl pkcs12 -export -inkey v3l0c1r4pt0r@gmail.com.key.pem -in v3l0c1r4pt0r@gmail.com.cert.pem -out v3l0c1r4pt0r@gmail.com.p12
Enter Export Password:
Verifying - Enter Export Password:
The last step was packaging the certificate and key into a PKCS#12 container. That is done to secure the key (we can encrypt it with a password), and it is the form required by Firefox. After creating the .p12 file (and verifying it is fine), we can (and SHOULD) delete the source files, as they are not protected in any way.
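One way to do that sanity check (my own suggestion, not part of the original procedure) is to ask openssl to parse the container back and confirm that both the certificate and the key are inside:
$ openssl pkcs12 -info -in v3l0c1r4pt0r@gmail.com.p12 -noout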
Configuring httpd to require user certificate
To enforce client verification, the following lines must be added to the virtual host configuration; in our case they can go just after the SSL certificate file paths.
SSLVerifyClient require
SSLVerifyDepth 2
We have to reload httpd for changes to take effect.
Installing certificate to Firefox
At last, to start using the newly generated certificate, we should install it into Firefox. The procedure is similar to the one for the CA certificate. We need to open the Certificate Manager window. Then, instead of going to Authorities, we go to Your Certificates, click Import and select the .p12 file.
If the file has a password, Firefox will ask for it before reading the content. If everything went well, you should see your certificate on the list. Now we can try connecting to our hidden service. We should see a window like this:
Finally, after confirmation, you should see your hidden service content. Congrats!
After setting up a working Tor hidden service, the next step towards ultimate security is a properly implemented Public Key Infrastructure (PKI). For this step there are already a lot of tutorials and there is not much that needs to be added to them. Personally, I have used the tutorial available here twice now and I find it very well written. Because I am going to follow that tutorial, I will just post the commands that have to be executed.
Before starting, I have to add one important remark. To make our PKI really secure, it is crucial to keep the root CA air-gapped, that is, the device on which it is generated should be permanently disconnected from the internet. A good candidate for such a device might be some old laptop or a Raspberry Pi Zero, as it lacks an Ethernet port and anything reasonable to connect to the internet with. It is also important to store the generated root CA key in a safe place and secure it with a strong non-dictionary password that is kept only in our mind.
If the requirements are fulfilled, we can start the setup. Below are the commands to type, as well as their output, to make it easier to determine whether the commands were successful or not.
Preparations
At first, we need to create the following directory structure:
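A layout like the following should work (a reconstruction based on the CA tutorial linked above and on the paths used in the command output below; the top-level directory is called ca here to match those paths):
$ mkdir -p ca/intermediate
$ cd ca
$ mkdir certs crl newcerts private
$ mkdir intermediate/certs intermediate/crl intermediate/csr intermediate/newcerts intermediate/private
$ chmod 700 private intermediate/private
$ touch index.txt intermediate/index.txt
$ echo 1000 > serial
$ echo 1000 > intermediate/serial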
Then, we need to save this file into root/openssl.cnf and this file into root/intermediate/openssl.cnf. Inside them, the only thing that has to be changed is the dir property in the CA_default section. Use an absolute path to your directory.
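For example, with the layout above, the relevant fragment of the root CA's openssl.cnf would look like this (the intermediate config points at the intermediate directory instead):
[ CA_default ]
# Directory and file locations - must be an absolute path
dir               = /home/r4pt0r/Research/cubie/newtor/ca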
Root CA
Note: when giving values for certain fields, it is better to provide some country, state (I have just checked that it is necessary), organization name and, most importantly, Common Name and e-mail, just in case some program checks whether they exist.
$openssl genrsa -aes256 -out private/ca.key.pem 8192
Generating RSA private key, 8192 bit long modulus
.................++
....++
e is 65537 (0x010001)
Enter pass phrase for private/ca.key.pem:
Verifying - Enter pass phrase for private/ca.key.pem:
$ chmod 400 private/ca.key.pem
$ openssl req -config openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 \
> -sha256 -extensions v3_ca -out certs/ca.cert.pem
Enter pass phrase for private/ca.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:PL
State or Province Name [England]:lodzkie
Locality Name []:
Organization Name [Alice Ltd]:r4pt0r Test Systems
Organizational Unit Name []:
Common Name []:r4pt0r Root CA
Email Address []:admin@example.com
$ chmod 444 certs/ca.cert.pem
$ openssl x509 -noout -text -in certs/ca.cert.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
9a:16:72:e8:ac:81:cd:be
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = r4pt0r Root CA, emailAddress = admin@example.com
Validity
Not Before: Feb 20 17:22:27 2018 GMT
Not After : Feb 15 17:22:27 2038 GMT
Subject: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = r4pt0r Root CA, emailAddress = admin@example.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (8192 bit)
Modulus:
00:dd:8c:8f:5d:be:f4:0f:63:91:9c:73:bf:a8:17:
<quite a lot of data>
6d:c1:3f:5c:05
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Key Identifier:
29:53:8A:D2:ED:CF:35:C2:BB:A8:12:06:01:74:99:A3:B8:E5:DC:FE
X509v3 Authority Key Identifier:
keyid:29:53:8A:D2:ED:CF:35:C2:BB:A8:12:06:01:74:99:A3:B8:E5:DC:FE
X509v3 Basic Constraints: critical
CA:TRUE
X509v3 Key Usage: critical
Digital Signature, Certificate Sign, CRL Sign
Signature Algorithm: sha256WithRSAEncryption
a9:6d:9e:d4:bf:1b:55:d8:f0:b5:e9:9d:56:e8:58:04:d6:c3:
<quite a lot of data>
89:50:26:4f:3e:93:95:06:c7:38:08:c7:16:0e:d2:a2
Intermediate CA
$openssl genrsa -aes256 -out intermediate/private/intermediate.key.pem 8192
Generating RSA private key, 8192 bit long modulus
.++
........................................................................................................................................................................................................................................................................................++
e is 65537 (0x010001)
Enter pass phrase for intermediate/private/intermediate.key.pem:
Verifying - Enter pass phrase for intermediate/private/intermediate.key.pem:
$ chmod 400 intermediate/private/intermediate.key.pem
$ openssl req -config intermediate/openssl.cnf -new -sha256 \
> -key intermediate/private/intermediate.key.pem -out intermediate/csr/intermediate.csr.pem
Enter pass phrase for intermediate/private/intermediate.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:PL
State or Province Name [England]:lodzkie
Locality Name []:
Organization Name [Alice Ltd]:r4pt0r Test Systems
Organizational Unit Name []:
Common Name []:r4pt0r Intermediate CA
Email Address []:admin@example.com
$ openssl ca -config openssl.cnf -extensions v3_intermediate_ca -days 3650 \
> -notext -md sha256 -in intermediate/csr/intermediate.csr.pem -out intermediate/certs/intermediate.cert.pem
Using configuration from openssl.cnf
Enter pass phrase for ca/private/ca.key.pem:
Can't open ca/index.txt.attr for reading, No such file or directory
140341269315520:error:02001002:system library:fopen:No such file or directory:crypto/bio/bss_file.c:74:fopen('ca/index.txt.attr','r')
140341269315520:error:2006D080:BIO routines:BIO_new_file:no such file:crypto/bio/bss_file.c:81:
Check that the request matches the signature
Signature ok
Certificate Details:
Serial Number: 4096 (0x1000)
Validity
Not Before: Feb 20 17:35:09 2018 GMT
Not After : Feb 18 17:35:09 2028 GMT
Subject:
countryName = PL
stateOrProvinceName = lodzkie
organizationName = r4pt0r Test Systems
commonName = r4pt0r Intermediate CA
emailAddress = admin@example.com
X509v3 extensions:
X509v3 Subject Key Identifier:
3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70
X509v3 Authority Key Identifier:
keyid:29:53:8A:D2:ED:CF:35:C2:BB:A8:12:06:01:74:99:A3:B8:E5:DC:FE
X509v3 Basic Constraints: critical
CA:TRUE, pathlen:0
X509v3 Key Usage: critical
Digital Signature, Certificate Sign, CRL Sign
Certificate is to be certified until Feb 18 17:35:09 2028 GMT (3650 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
$openssl x509 -noout -text -in intermediate/certs/intermediate.cert.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4096 (0x1000)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = r4pt0r Root CA, emailAddress = admin@example.com
Validity
Not Before: Feb 20 17:35:09 2018 GMT
Not After : Feb 18 17:35:09 2028 GMT
Subject: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = r4pt0r Intermediate CA, emailAddress = admin@example.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (8192 bit)
Modulus:
00:d4:c9:03:36:4a:dd:3d:ee:ca:bd:c1:d8:fe:51:
<quite a lot of data>
5a:ca:74:74:c8:a2:b2:69:0a:0c:c7:f9:d6:8a:58:
41:45:73:fc:2b
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Key Identifier:
3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70
X509v3 Authority Key Identifier:
keyid:29:53:8A:D2:ED:CF:35:C2:BB:A8:12:06:01:74:99:A3:B8:E5:DC:FE
X509v3 Basic Constraints: critical
CA:TRUE, pathlen:0
X509v3 Key Usage: critical
Digital Signature, Certificate Sign, CRL Sign
Signature Algorithm: sha256WithRSAEncryption
15:04:2f:85:89:f6:77:82:c4:60:78:f0:4f:ac:39:ad:15:14:
<quite a lot of data>
7c:71:95:db:16:02:de:01:70:fe:8f:48:94:92:11:1b
$openssl verify -CAfile certs/ca.cert.pem intermediate/certs/intermediate.cert.pem
intermediate/certs/intermediate.cert.pem: OK
$ cat intermediate/certs/intermediate.cert.pem certs/ca.cert.pem > intermediate/certs/ca-chain.cert.pem
$ chmod 444 intermediate/certs/ca-chain.cert.pem
Server certificate
In the following parts, wherever [domain] appears, it should be changed to hostname of our hidden service.
At first, we need to generate certificate request (CSR) on our server:
$openssl genrsa -out [domain].onion.key.pem 4096
Generating RSA private key, 4096 bit long modulus
.................++
..............................................................................++
e is 65537 (0x010001)
$ chmod 400 [domain].onion.key.pem
$ openssl req -config ca/intermediate/openssl.cnf \
> -key [domain].onion.key.pem -new -sha256 -out [domain].onion.csr.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:PL
State or Province Name [England]:lodzkie
Locality Name []:
Organization Name [Alice Ltd]:r4pt0r Test Systems
Organizational Unit Name []:
Common Name []:[domain].onion
Email Address []:admin@[domain].onion
Then, we will sign the request with the intermediate CA private key, thus issuing the certificate. But first of all, we need to copy the CSR from the server into the intermediate/csr/ directory.
$ openssl ca -config intermediate/openssl.cnf -extensions server_cert -days 375 \
> -notext -md sha256 -in intermediate/csr/[domain].onion.csr.pem -out intermediate/certs/[domain].onion.cert.pem
Using configuration from intermediate/openssl.cnf
Enter pass phrase for ca/intermediate/private/intermediate.key.pem:
Can't open ca/intermediate/index.txt.attr for reading, No such file or directory
139810167087040:error:02001002:system library:fopen:No such file or directory:crypto/bio/bss_file.c:74:fopen('ca/intermediate/index.txt.attr','r')
139810167087040:error:2006D080:BIO routines:BIO_new_file:no such file:crypto/bio/bss_file.c:81:
Check that the request matches the signature
Signature ok
Certificate Details:
Serial Number: 4096 (0x1000)
Validity
Not Before: Feb 20 17:52:13 2018 GMT
Not After : Mar 2 17:52:13 2019 GMT
Subject:
countryName = PL
stateOrProvinceName = lodzkie
organizationName = r4pt0r Test Systems
commonName = [domain].onion
emailAddress = admin@[domain].onion
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
Netscape Cert Type:
SSL Server
Netscape Comment:
OpenSSL Generated Server Certificate
X509v3 Subject Key Identifier:
DD:6E:E8:78:91:B9:F7:F4:0A:06:3F:D2:38:6D:11:4E:3C:D3:BC:E0
X509v3 Authority Key Identifier:
keyid:3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70
DirName:/C=PL/ST=lodzkie/O=r4pt0r Test Systems/CN=r4pt0r Root CA/emailAddress=admin@example.com
serial:10:00
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
Certificate is to be certified until Mar 2 17:52:13 2019 GMT (375 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
$ openssl x509 -noout -text -in intermediate/certs/[domain].onion.cert.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4096 (0x1000)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = r4pt0r Intermediate CA, emailAddress = admin@example.com
Validity
Not Before: Feb 20 17:52:13 2018 GMT
Not After : Mar 2 17:52:13 2019 GMT
Subject: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = [domain].onion, emailAddress = admin@[domain].onion
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (4096 bit)
Modulus:
00:c5:d3:e2:a0:97:b8:4d:67:22:94:c9:be:17:e3:
<quite a lof of data>
49:76:cf
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
Netscape Cert Type:
SSL Server
Netscape Comment:
OpenSSL Generated Server Certificate
X509v3 Subject Key Identifier:
DD:6E:E8:78:91:B9:F7:F4:0A:06:3F:D2:38:6D:11:4E:3C:D3:BC:E0
X509v3 Authority Key Identifier:
keyid:3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70
DirName:/C=PL/ST=lodzkie/O=r4pt0r Test Systems/CN=r4pt0r Root CA/emailAddress=admin@example.com
serial:10:00
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
Signature Algorithm: sha256WithRSAEncryption
b0:92:d9:d5:3b:31:38:f6:b8:51:1f:41:e9:f7:d8:e6:33:67:
<quite a lot of data>
ee:c4:eb:19:86:69:00:26:8d:04:7b:97:0b:8f:f5:76
$openssl verify -CAfile intermediate/certs/ca-chain.cert.pem intermediate/certs/[domain].onion.cert.pem
intermediate/certs/[domain].onion.cert.pem: OK
httpd configuration
Finally, we can use the generated files to set up HTTPS encryption on the webserver. For this I am using httpd, as it is the most common webserver in use. We need the following files:
[domain].onion.key.pem – the private key that will be used to set up the TLS session
[domain].onion.cert.pem – the certificate that will prove our identity, so the web browser will not display any warnings as long as we have the CA certificate installed
ca-chain.cert.pem – the chain of certificates we created together with the intermediate CA, consisting of both CAs – root and intermediate
Below is httpd configuration file, after enabling TLS:
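A minimal sketch of such a vhost (the /srv/hidden_service paths are assumptions; only the local port 666 and the file names come from this series) could look like this:
Listen 127.0.0.1:666

<VirtualHost 127.0.0.1:666>
    ServerName [domain].onion
    DocumentRoot "/srv/hidden_service/public_html"

    SSLEngine on
    SSLCertificateFile      "/srv/hidden_service/tls/[domain].onion.cert.pem"
    SSLCertificateKeyFile   "/srv/hidden_service/tls/[domain].onion.key.pem"
    SSLCACertificateFile    "/srv/hidden_service/tls/ca-chain.cert.pem"
</VirtualHost>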
As can be seen above, all the necessary files have been moved to the tls directory inside our hidden service's main directory.
Afterwards, one slight change is needed in the torrc file:
HiddenServicePort 443 127.0.0.1:666
From now on, we need to use https://[domain].onion to visit our site, as it is now TLS-encrypted and uses port 443, which is the default for HTTPS. For convenience, we can set up another httpd vhost on a different port that redirects all HTTP traffic to HTTPS and map it to port 80, so remembering about the https in the address will not be necessary. But this is optional, so I will leave it as an exercise for the reader.
Firefox
From this point on it is useful to have a Firefox that is not constantly warning about an insecure connection. To achieve this, we should install the CA certificate into Firefox. One remark here: as we are going to make Firefox trust our certificate, our whole browsing through that instance of Firefox now relies on our CA's private key. So it is best not to use the same instance for anything else unless you are really sure that the private keys of both the root and the intermediate CA are perfectly secure.
To install the certificate, follow the screenshots below:
As a student I was lucky to have unlimited private Git repositories on Github, a feature of their first paid plan. Unfortunately, I don't have access to an educational e-mail anymore, so I won't be able to renew the offer. This creates a need to migrate that feature somewhere else. Some time ago, I installed cgit and gitolite on my single board computer (SBC), but because of Github there was no need to use them. Now they seem like a good replacement for Github's Developer plan.
A few weeks ago there was an interesting event – the Tor Project introduced a new version of their Hidden Services – v3, which changes the length of the hidden service address in the .onion domain and disables the "feature" that allowed some nodes in the network to index all existing service addresses. This seems like a good moment to give it a try and check how fast (or rather how slow) serving git through Tor on a few-year-old SBC will be. By the way, I will show how to configure things with maximum security in mind.
Disclaimer: I am not a person with deep knowledge of the inner workings of the Tor network, so I strongly encourage you to read a thing or two about how to use it safely. This article might contain errors that could reveal your identity, especially when used together with hidden services you do not own yourself.
Prerequisites
Let's start with a summary of what we will need to make Tor v3 work:
tor in version 0.3.2.9 or higher
alternatively Tor Browser 7.5 or higher
for Android: Orbot and Orfox (at the moment of writing, there is no support in the current version of Orbot, so a custom compilation is required – I am using Termux to provide the tor binary)
httpd or any other HTTP server able to serve a site with only one vhost on a separate TCP port
Because of the way I am planning to configure the hidden service in the future, it might be a good idea to set up a separate Tor Browser at this point, dedicated to this service, if this is going to be a production configuration. If this is just an experiment, this advice can safely be ignored. However, it is good to know how to undo the modifications to the browser that will be made in the next parts.
httpd
What we need to do is listen on localhost, on some random TCP port. Then we will set up httpd to serve only one virtual host on this custom port. It would be perfect to disable any other vhosts, because our hidden service will also be reachable as a non-hidden service by local users; so if some other service is buggy and allows connections to other local services (see e.g. DNS rebinding), at the very least the address of our hidden service could be compromised.
Furthermore, httpd must be able to traverse into the public_html directory, so every directory on the path from the filesystem root down to public_html must have the execute bit set for the http user, and the directory itself, as well as its content, must be readable by (or better, owned by) http.
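A minimal vhost definition for that purpose might look like the sketch below (the DocumentRoot path is an assumption; only the local-only port 666 comes from this post):
Listen 127.0.0.1:666

<VirtualHost 127.0.0.1:666>
    DocumentRoot "/srv/hidden_service/public_html"
    <Directory "/srv/hidden_service/public_html">
        Require all granted
    </Directory>
</VirtualHost>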
After that, and after starting httpd, it should be possible to visit http://localhost:666 in a web browser and see the content of the public_html directory. If this works, we can move on to the tor configuration.
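The hidden service itself is declared in torrc; based on the /etc/tor/hsv3 directory mentioned below and the local port used above, the relevant fragment presumably looks roughly like this:
HiddenServiceDir /etc/tor/hsv3
HiddenServiceVersion 3
HiddenServicePort 80 127.0.0.1:666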
Now, on the first startup of tor, it should create the keys for our new hidden service. We can look into /etc/tor/hsv3/hostname to see the .onion address. It is a good idea to make the key files and the hostname file readable only by the user running the tor service. In the case of a service started by systemd, this will probably be tor by default.
After starting the tor service (systemctl start tor in the case of systemd), we can check that everything works properly by visiting our hidden service with a tor-enabled browser (using tor 0.3.2.9 or higher). That's it.
Firefox for Android
At the time of writing this article there is still no update to the Orbot app, which provides a GUI for tor. Because of that, it might be necessary to use ordinary Firefox with tor as a proxy, which is generally a bad idea for connecting to any hidden services, because of privacy and anonymity. Fortunately, we can live with revealing our identity to ourselves 🙂 so we can do it just this once.
What we need to change are the following configuration options, available on the about:config page:
network.proxy.socks to localhost
network.proxy.socks_port to 9050
network.proxy.socks_remote_dns to true
network.proxy.socks_version to 5, if any other (should be default)
network.proxy.type to 1 (0 means no proxy, 5 is system proxy)
Conclusion
Now we are ready to use our hidden service from both desktop and mobile. Still, we use only the HTTP protocol, which is not a big problem, as tor already provides encryption. Nevertheless, our next goal will be to configure HTTPS. And then we will configure client authentication for the ultimate security of our hidden service.
After part number four, we already have an ELF file storing all the data we found in the firmware image, described in a way that should make our analysis easier. Moreover, we have the ability to define new symbols inside our ELF file. The next step is to add support for our custom architecture to objdump, and this is what I want to show in this tutorial.
Finding best architecture to copy
If we want to add a new architecture to the objdump code, we need to learn the interfaces that have to be implemented. It is easier if we can base our work on some existing code. After some digging into the binutils code I learned that the bfd and opcodes libraries are of special interest. They contain the code dedicated to particular architectures. The first one seems to be related to object file handling (which in our case is ELF), so we should not tinker with it too much. The second one is related to disassembling binary programs, so it is what we are looking for.
I did a quick examination of the source code related to popular architectures and it does not seem easy to adjust to our needs. The architecture I found best suited for modification is MicroBlaze. Its source seems to be quite well written, clean and short. Also, from my research into the architecture name for the LKV373A (part 2, failed by the way), I remember it is quite similar to the one present in the LKV373A, so it is an even better choice.
Compiling objdump for target architecture
At first it is useful to learn how to compile objdump so that it is able to disassemble a program written for our target. MicroBlaze is not really a mainstream architecture, so there aren't many programs compiled for it available online after typing 'microblaze program elf' into the usual search engine. However, I was able to find two of them, so I could verify that the compilation worked. If you can't find any, I uploaded these to MEGA, so they can serve as test cases. The first one is a minimal valid file, the other one is quite huge.
Compilation is very easy. The only thing that needs to be done besides the usual ./configure && make && make install is adding the target option to the configure script. So, the invocation looks as follows:
./configure --target=microblaze-elf
Of course, the install step can safely be skipped, as can the compilation of tools other than objdump. objdump itself seems to be built using make binutils/objdump. However, it can't be built successfully using that shortcut, so the whole binutils package must be configured in such a way that everything that cannot be built is excluded from the build.
Setting up own architecture
The next step is to add support for our brand new, custom architecture to the binutils configuration files and copy the MicroBlaze sources, so they will stand in for our architecture until we write our own implementation. Then it should be possible to test objdump again against our sample MicroBlaze programs, and disassembly should still work.
Even without any modification to the binutils source or configs, it should be possible to configure it for any random architecture. The only constraint is the format of the target string: ARCH-OS-FORMAT, where FORMAT is most likely to be elf. So, if we pass lkv373a-unknown-elf as the target, it will work. The -unknown part is usually skipped, but that short form will not work; if we need it to, config.sub must be modified. config.sub is used to convert any string passed to configure into canonical form, in our case lkv373a-unknown-elf. If it detects that the string is already in canonical form, it does nothing.
The final configure command will be slightly more complex, as we have to disable some parts that are not of interest to us and would require additional effort to build:
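A plausible equivalent of that invocation (the particular set of --disable switches below is only a guess) is:
./configure --target=lkv373a-unknown-elf --disable-gas --disable-ld --disable-gold --disable-gprof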
Although passing something random as the target option works at the configure stage, it will obviously fail at the make stage. What make does first is configure all the sublibraries. The ones of interest to us are bfd and opcodes. And the first one fails. So this is the first problem we need to get rid of.
bfd/config.bfd
The purpose of this file is to set some environment variables depending on the target architecture. If it does not know the architecture, it returns an error to the caller, which is probably bfd's configure script, called by make. According to the documentation in the file header, it sets the following variables:
targ_defvec – default vector. This links target to list of objects that will provide support for ELF file built for specific architecture (stored in bfd/configure.ac)
targ_selvecs – list of other selected vectors. Useful e.g. when we need support for both 32- and 64-bit ELFs. Not needed here.
targ64_selvecs – 64-bit related stuff. Used when target can be both 32- and 64-bit, meaningless in our case.
targ_archs – name of the symbol storing bfd_arch_info_type structure. It provides description of architecture to support.
targ_cflags – looks like some hack to add extra CFLAGS to compiler. We don’t care.
targ_underscore – not sure what it is, should have no impact on our goals (possible values are yes or no)
To sum up, what we need to do in this step is define the default vector that we will later add to configure.ac, and set the name of the architecture description structure. The structure itself will be defined later. Finally, I ended up with the following patch:
Unfortunately, as we modified the .ac script, we now need to rebuild configure. From my experience, any tinkering with autohell, after solving one problem, creates five more. We need to get into the bfd directory and reconfigure the project:
cd bfd
autoreconf
Now, if it worked for you, you should definitely go play the lottery 🙂 . For me it said that I need exactly the same version of autoconf as used by the binutils developers. Because autoconf is so great, what I will show now is probably completely useless for anyone else, but the hacks I needed to do were, at first, to add:
m4_define([_GCC_AUTOCONF_VERSION],[2.69])
to the beginning of the configure.ac file. Then, bfd/doc/Makefile.am contains the removed cygnus option at the beginning, in AUTOMAKE_OPTIONS, so we need to remove it. After that, running automake --add-missing, as autoreconf suggests, and then autoreconf again should solve the problem. But, as I said, this will probably not work for you. I can only wish you good luck.
(If you were following the steps, you might have noticed that autoconf complained about not being version 2.64, yet we overrode the version from 2.69 to 2.69 and it worked 🙂 – don't ask me why, please!)
After this step, compilation should start (and will obviously fail miserably on bfd, as it is missing a few symbols). Now it's time to make bfd compilable.
bfd/elf32-lkv373a.c
This file is meant to provide support for custom features of the ELF file. As we don't have any, we can safely do nothing here. A good template for such a file is elf32-m88k.c, as it does exactly this.
One thing that seems to be important here is the EM value of the architecture described. EM is an enum used in the ELF file to define the target architecture, so it might need to be adjusted in our new elf32-lkv373a.c file. By the way, the definition of this value has to be added to include/elf/common.h:
It might also be a good idea to add it to elfcpp/elfcpp.h. To make the file compile, it is necessary to add the following to bfd/bfd-in2.h as a value of the bfd_architecture enum:
bfd_arch_lkv373a,   /* LKV373A */
bfd/archures.c
As we declared bfd_lkv373a_arch as the symbol holding the CPU description structure, we now need to add this declaration to archures.c, as this is the file where it will be used. We have to add:
A similar situation occurs in the targets.c file. Here we have to provide a declaration of our vector as a bfd_target. This is another structure, which seems to be generated automatically, so we should not care about it.
bfd/cpu-lkv373a.c
This last file we need in bfd provides the bfd_arch_info_type structure and… that's it! It can easily be borrowed from cpu-microblaze.c with only slight modifications. One thing that needs explanation here is section_align_power. As far as I understand it, it is the power of two to which the beginning of a section in memory must be aligned. It should be safe to put 0 here, as we are not going to load our ELF into memory.
This should close the bfd part of the initialization. As you can see, there was no development at all to be done here. Let's now move on to the opcodes library.
opcodes/configure.ac
At first we need to define the objects to build for the LKV373A architecture in the opcodes library. This is quite similar to what we had to do in configure.ac of the bfd library.
Hopefully, implementing the -dis file will be enough. I've made a copy of the MicroBlaze configuration. In the same way we will copy the whole source file and any related headers in the next step.
Now, similarly to bfd's configure.ac, we have to reconfigure it. And again, nobody knows what errors we will encounter.
opcodes/disassemble.c
The only thing that has to be done here is to set the pointer to the disassemble function. For this, the following snippets should be added:
opcodes/lkv373a-dis.c
This is where the real stuff will happen. As our goal, for now, is not to implement the LKV373A architecture, but rather to set everything up so that objdump builds, we can copy the source file from microblaze-dis.c. It is also required to copy the MicroBlaze-related headers used by this file, that is:
opcodes/microblaze-dis.h
opcodes/microblaze-opc.h
opcodes/microblaze-opcm.h
Then we change the include directives in them to reference the lkv373a files rather than the microblaze ones.
Now, optionally, we could rename any symbols referring to the name microblaze, but this should not be required, as the original microblaze files should not be included in the build. The only change that needs to be done is renaming print_insn_microblaze to print_insn_lkv373a, as this is what we added to disassemble.c.
You should now be able to compile a working objdump with LKV373A support (with a wrong implementation for now, of course). We can then verify that everything works on a slightly modified ELF file for the MicroBlaze architecture (the EM field must point to LKV373A – the value must be 0x373a). Well done!
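One quick way to produce such a test file (a helper sketch of mine, not part of the original flow) is to patch the e_machine field of an existing MicroBlaze ELF; e_machine sits at offset 0x12 of the ELF header and its byte order follows the EI_DATA byte:

#!/usr/bin/env python3
# patch_em.py <elf> - overwrite e_machine with the LKV373A id (0x373a)
import struct
import sys

EM_LKV373A = 0x373a  # value taken from the paragraph above

with open(sys.argv[1], 'r+b') as f:
    f.seek(5)                              # e_ident[EI_DATA]: 1 = LSB, 2 = MSB
    endian = '<' if f.read(1) == b'\x01' else '>'
    f.seek(0x12)                           # e_machine, right after e_type
    f.write(struct.pack(endian + 'H', EM_LKV373A))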
NOTE: all the steps, done till now are available on tutorial-setup tag in repository on Github.
Functions to implement
Now, finally, the real fun starts. The bindings between the opcodes library and objdump itself require at least print_insn_lkv373a to be implemented.
What should happen inside this function is quite simple and can be described in the following steps:
Gets bfd_vma and struct disassemble_info (called info below) as parameters
Read raw data containing instructions using info->read_memory_func
Call info->memory_error_func in case of any errors
Use info->fprintf_func to print disassembled instruction into info->stream
Optionally use info->symbol_at_address_func to determine if there is any symbol declared at address decoded from instructions
If symbol exists, call info->print_address_func
Return number of bytes consumed
Following is some documentation I wrote to make the implementation easier (mostly translated from the inline comments) for the functions to be called:
/**
 * \brief Function used to get bytes to disassemble
 *
 * \param memaddr Address of the current instruction
 * \param myaddr  Buffer, where the bytes will be stored
 * \param length  Number of bytes to read
 * \param dinfo   Pointer to info structure
 *
 * \return errno value or 0 for success
 */
int (*read_memory_func)
  (bfd_vma memaddr, bfd_byte *myaddr, unsigned int length,
   struct disassemble_info *dinfo);

/**
 * \brief Call if unrecoverable error occurred
 *
 * \param status  errno from read_memory_func
 * \param memaddr Address of current instruction
 * \param dinfo   Pointer to info structure
 */
void (*memory_error_func)
  (int status, bfd_vma memaddr, struct disassemble_info *dinfo);

/**
 * \brief Pointer to fprintf
 *
 * \param stream Pass info->stream here
 * \param char   Format string
 * \param ...    vargs
 *
 * \return Number of characters printed
 */
typedef int (*fprintf_ftype) (void *, const char*, ...) ATTRIBUTE_FPTR_PRINTF_2;

/**
 * \brief Determines if there is a symbol at the given ADDR
 *
 * \param addr  Address to check
 * \param dinfo Pointer to info structure
 *
 * \return If there is returns 1, otherwise returns 0
 * \retval 1 If there is any symbol at ADDR
 * \retval 0 If there is no symbol at ADDR
 */
int (* symbol_at_address_func)
  (bfd_vma addr, struct disassemble_info *dinfo);

/**
 * \brief Print symbol name at ADDR
 *
 * \param addr  Address at which symbol exists
 * \param dinfo Pointer to info structure
 */
/* Function called to print ADDR.  */
void (*print_address_func)
  (bfd_vma addr, struct disassemble_info *dinfo);
For an easier start of development, this commit can be used as a template. You can find the effects of implementing things according to this description on the lkv373a branch of my binutils fork on Github. After this step, you should have a working objdump, able to disassemble the architecture of your choice.
Alternative way
According to the binutils documentation, porting to new architectures should be done using a different approach. Instead of copying sources from other architectures, developers should write CPU description files (the cpu/ directory) and then use CGEN to generate all the necessary files. However, I found these files way too complicated compared to the goal I wanted to achieve, therefore I used the shortcut. In reality, however, this might be the better way, as the final result should be support for the new architecture not only in objdump, but also in e.g. GAS (the GNU assembler). If you want to go that way, another useful resource might be the description of the CPU description language.
Plans for the future
As I am now able to speed up the reverse engineering of both the instruction set and the LKV373A firmware, I am planning to create a public repository of my progress and guess the operations performed by some more opcodes, as so far I know only a few of them. So, I will probably push some more commits to the binutils repo as well. I hope this will let me gain some more knowledge about the LKV373A and allow me, or someone else, to reverse engineer the second part of the firmware, which seems to be way more interesting than the one I have been reverse engineering till now.
As we should now be able to follow any jump present in the code, it is time to make the analysis more automatic. My target tool for that purpose will be objdump. However, we still have the firmware image as a raw dump of memory. To be able to use objdump easily, we need to pack our firmware into some container understandable by objdump. The most obvious choice is ELF (Executable and Linkable Format) and this is what I am going to use.
For the purpose of packing data into ELF, I've made a Python library that makes it easier. For now, it is able to split the firmware image into sections, like .text or .data, so objdump will disassemble only the parts of the firmware that are in fact code. Moreover, it can define symbols inside the binary, so it is possible to store information about where certain functions start and end, and the same for any variables, like strings. As of now, there is no CLI interface for the program. If it turns out that such an interface is necessary (e.g. for the addition of many symbols), it will be added.
The library code can be downloaded from Github. Currently, any LKV373A-specific modifications to the library are stored on the lkv373a branch, so as not to pollute the main – master – branch. Throughout this tutorial I assume we are using the code on this branch, so there might be some LKV373A specifics, especially regarding enum types (i.e. the processor architecture enum).
At this point I need to warn that I am not going to describe the internal structure of an ELF file, nor any features visible from the outside, like the concept of sections, so if you are not familiar with them, it is a good time to learn about them, as otherwise it might be very difficult to understand what I am writing about. There are many good resources explaining them. The ones I was using are this blog post and this documentation.
Creating new ELF
Example code that creates a brand new ELF file is as easy as the snippet below:
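(A rough sketch based on the description that follows; the output file name and the EM constant for the custom architecture are assumptions – the latter lives on the lkv373a branch mentioned above.)

#!/usr/bin/env python3
import os

from makeelf.elf import *

# EM.EM_LKV373A is assumed to be defined on the lkv373a branch of makeelf
elf = ELF(e_machine=EM.EM_LKV373A)

fd = os.open('lkv373a.fw.elf', os.O_WRONLY | os.O_CREAT)
os.write(fd, bytes(elf))
os.close(fd)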
This at first does all the necessary imports, then creates a new ELF object, and finally converts it to a bytes object and immediately writes it to a file descriptor. That's it!
After this, you should get a valid, empty ELF file for an architecture called lkv373a, which obviously does not exist and which no other program knows how to handle, but we are going to change that in the future.
While creating the ELF object, a few things can be defined in addition to the architecture id. They are all described in the documentation I will mention near the end of this tutorial. You are also free to dig into the structure of the ELF object. There is no encapsulation in it and structure validation is very permissive, so even completely broken ELFs can be produced, if needed.
Adding section
The next step is to add some sections to our ELF file, as in the sketch below.
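(Again only a sketch: the .irq and .text names, the .text start address 0x7d100 and the use of append_section come from this post, while the firmware file name, the remaining boundaries and the exact argument order of append_section are assumptions.)

# read the raw firmware dump (file name is a placeholder)
fd = os.open('lkv373a.fw.bin', os.O_RDONLY)
fw = os.read(fd, 0x200000)
os.close(fd)

TEXT_START = 0x7d100   # start of .text, known from the symbol examples below
IRQ_END = 0x100        # placeholder boundary of the interrupt vector area

# append_section(name, data, load_address) returns the index of the new section
irq_id = elf.append_section('.irq', fw[0:IRQ_END], 0)
txt_id = elf.append_section('.text', fw[TEXT_START:], TEXT_START)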
At first, I am extracting them from the firmware image and then inserting them into the ELF object. append_section is a handy wrapper around the low-level modifications that must be done on the ELF structure hidden under what we see as the ELF instance (these low-level structures are, however, still available to the user as the ELF.Elf member).
Modifying section attributes
OK, so now we have sections in our ELF file, ready to be saved to disk. Before that, one more thing can be done: setting proper attributes. They tell readers whether the program is able to write or execute sections of memory, among other features I am going to ignore here. This might be useful, as some readers might otherwise be confused about what is code (text) and what is data. In our case we have two text sections (.irq and .text), so we are going to set the executable flag (SHF_EXECINSTR) on them. Furthermore, we will set the SHF_ALLOC flag for any section that is going to be loaded into memory (so all of them).
Segments are another concept, existing alongside sections. They are stored in the program header of the ELF file and are somehow linked to the section data. They allow defining another set of attributes for areas in memory. I don't think defining them is required to perform the analysis in objdump, but since at least one program header defining a segment must exist in an ELF file of type executable, there is an interface for them similar to the one for sections.
To define new segment, based on .text section, you can issue:
elf.append_segment(txt_id, flags='rx')
This also marks the segment as read and executable, but not writable.
Loading existing ELF
Loading existing ELF can be easily done from file with:
newelf, b = ELF.from_file('lkv373a.fw.elf')
Alternatively, it can also be loaded from bytes object:
fd = os.open('some.elf', os.O_RDONLY)
b = os.read(fd, 0xffff)
os.close(fd)
manualelf, b = Elf32.from_bytes(b)
In latter case, I assumed that os library is already imported into python.
Adding a symbol
This is very useful for the analysis of code. New symbols can be added using calls like the ones sketched below:
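(A reconstruction from the description that follows; the symbol names, the sizes other than 0x44 and the keyword/enum spellings (sym_type, STT, STB) are assumptions.)

# a function of length 0x44 at the very beginning of .irq
elf.append_symbol('irq_handler', irq_id, 0, 0x44,
        sym_type=STT.STT_FUNC, sym_binding=STB.STB_GLOBAL)

# a function known only by its absolute address; convert it to a .text offset
elf.append_symbol('some_func', txt_id, 0x9b9f8 - 0x7d100, 0x60,
        sym_type=STT.STT_FUNC)

# a string literal: offset and size both derived from absolute addresses
# (assuming the literal ended up in the same section)
elf.append_symbol('fmt_str', txt_id, 0xbab5c - 0x7d100, 0xbab73 - 0xbab5c,
        sym_type=STT.STT_OBJECT)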
The first call defines a function of length 0x44 in the .irq section. To do this, the ID of the .irq section must be known. Luckily, we want to add the symbol at the beginning of the section, so 0 was provided as the offset.
In the second case, we also want to define a function, but this time we only know the absolute address of the function (0x9b9f8), while what we need to pass is the offset within the .text section. To achieve this, we need to subtract the address of the start of the .text section (0x7d100).
In the last example, we define a string as an object with a certain address and length. Both the address and the length are computed by subtracting absolute addresses. This symbol will be marked as local, which is the default behavior of the append_symbol function.
Library documentation
There are many more things possible to do using makeelf library. What I showed here is mostly, what is possible using high-level wrappers, doing many things under the hood. But as there is also low-level interface, virtually anything is possible.
To make exploring interfaces easier, I’ve made doxygen documentation for most of the library. It can be found on my server, here. Feel free to use the library for anything you want.
Conclusion
The library presented here should allow us to take one step further towards an easy-to-use reverse engineering environment. It will, by the way, allow storing new findings in easily modifiable Python scripts.
The examples of the library interfaces shown here split the LKV373A firmware image into 4 sections. At this moment I already know that there are at least 6 sections, with code and data each in two parts (forming an ICDCDS layout, where I is irq, C is code and so on). There should also be some more symbols that could already be placed.
If I succeed in porting objdump, or any other tool able to disassemble an ELF file, the next step will be to publish a Python script, utilizing the library presented here, that annotates the LKV373A firmware. So stay tuned; I hope there will be many further interesting findings throughout this reversing process!
As I wrote in the previous part, my choice is in fact to reverse engineer the instruction set. The goal of this post is not to reverse engineer the whole instruction set, because even in RISC architectures some instructions are used quite rarely.
Before starting the actual reverse engineering, the place where such an analysis will be easiest should be identified. Then it might be possible to use frequently repeating patterns to guess the instructions the pattern consists of. The result of this tutorial will be a description of the opcodes related to jumping and a few other often-used opcodes, mostly related to memory operations.
Identifying target
As written above, the first step is to find a good target. As was shown in the previous article, it is possible to find references to constants mixed into the code.
In the picture on the right one especially interesting piece of information can be seen – it seems that FreeRTOS was used as the operating system. FreeRTOS is an open source project, so its source code can be downloaded.
This information could possibly be used later to link the compiled code with the FreeRTOS source code. Let's look into the source code to find some useful location. As we already know the encoding of the opcode for an operation on strings, finding some string in the code might work. I was able to identify one such place in the tasks.c file. It is shown in the snippet below:
4110                 if( ulStatsAsPercentage > 0UL )
4111                 {
4112                     #ifdef portLU_PRINTF_SPECIFIER_REQUIRED
4113                     {
4114                         sprintf( pcWriteBuffer, "\t%lu\t\t%lu%%\r\n", pxTaskStatusArray[ x ].ulRunTimeCounter, ulStatsAsPercentage );
4115                     }
4116                     #else
4117                     {
4118                         /* sizeof( int ) == sizeof( long ) so a smaller
4119                         printf() library can be used. */
4120                         sprintf( pcWriteBuffer, "\t%u\t\t%u%%\r\n", ( unsigned int ) pxTaskStatusArray[ x ].ulRunTimeCounter, ( unsigned int ) ulStatsAsPercentage );
4121                     }
4122                     #endif
4123                 }
4124                 else
4125                 {
4126                     /* If the percentage is zero here then the task has
4127                     consumed less than 1% of the total run time. */
4128                     #ifdef portLU_PRINTF_SPECIFIER_REQUIRED
4129                     {
4130                         sprintf( pcWriteBuffer, "\t%lu\t\t<1%%\r\n", pxTaskStatusArray[ x ].ulRunTimeCounter );
4131                     }
4132                     #else
4133                     {
4134                         /* sizeof( int ) == sizeof( long ) so a smaller
4135                         printf() library can be used. */
4136                         sprintf( pcWriteBuffer, "\t%u\t\t<1%%\r\n", ( unsigned int ) pxTaskStatusArray[ x ].ulRunTimeCounter );
4137                     }
4138                     #endif
4139                 }
4140
4141                 pcWriteBuffer += strlen( pcWriteBuffer );
If we look at the code before the snippet, we can see that the function is surrounded by ifdefs and is meant to be turned on only for demo purposes. Also, searching for the complete format string from the sprintf call above fails on the LKV373A firmware. Fortunately, it is present in the code, just with some modifications. One of them is the format string we were searching for. I was able to identify these strings starting at offset 0xbab2f. You can see it in the hexdump. What is a bit surprising is that there are four such strings, while we expected only two in the whole code. But the "IDLE" string after them confirms that this must be the tasks.c module.
Now we can use the method shown in the previous tutorial about processor identification to find references to these strings. Finally, I found usages of offsets 0xbab5c and 0xbab73 (marked in green and blue) near offset 0x91fd4.
At this moment, we have machine code and source code that was very likely compiled one-to-one into this machine code. We can also see here a very useful side effect of open source popularity: we have a system that serves quite an unusual function and yet uses open source software. So we can be almost sure that any random part of the code also contains open source software.
Code patterns
As we already have quite a reliable anchor for our analysis, we could try identifying more opcodes. But, to make things easier, I want to go the pattern-matching way. Whichever architecture you analyze, you see some patterns that are the same or almost the same on any architecture. This is especially true of RISC architectures, as they have a very limited set of instructions, so the compiler has to join two or more instructions to get the desired high-level functionality. In this section, I will describe some of the patterns I was able to identify and decode in the LKV373A firmware. They are:
Function call
Function prologue and epilogue
Read-only memory access
Compare and jump
Variadic functions
Following is short description of the above patterns.
Function call
This is main element of ABI (Application Binary Interface) from the point of view of a programmer. Therefore it should also be well-known, even to people not involved in assembly programming or reverse-engineering. It is all about the method of passing arguments, before a call to a function.
Let’s see how such a call looks on MIPS architecture:
As we can see, in the case of MIPS, arguments are passed in registers named a0, a1, a2 and so on. Then the address of the function to call is loaded into t9 and jalr (jump and link register) is performed. Usually, in an ABI where arguments are passed in registers, when the number of arguments is greater than the number of such registers, the remaining ones are passed through memory (i.e. the stack).
Function prologue and epilogue
When performing a call to a different function, the state of the processor has to be preserved, so that after the call it is again the same as before (with the exception of a few registers, used e.g. for passing the return value). This operation may be done by the caller or the callee. Let's look again at MIPS code to see how it works there:
I think there is nothing special to comment on here. The opposite happens in the function epilogue and, additionally, immediately after that a return instruction should appear.
Read-only memory access
This one was already partially analyzed in the previous part of this series. It usually appears where some strings need to be used in the code. Strings written in code as literals are stored in a part of memory where they shouldn't be modified. Then their addresses are computed using some base register, or directly if the code is not relocatable. This is how it works on MIPS:
lw      t9, -30404(gp)
But, as we could see in the previous post, we have something missing on our mysterious architecture. There, the address was computed as $3 = $3 + 0xbadadd. So, we should expect that the register used is set to something before that.
Compare and jump
This is, after calling conventions, another extremely popular scheme. It usually consists of two opcodes. At first, two numbers are compared and some flags in the processor are set. Then, based on the state of one of the flags, a jump is performed, or processing continues if the flag has a value different from what we expect. This time, let's see how it works on the x86 platform, as MIPS uses a bit different philosophy:
cmp     [ebp+arg_4], eax
jnz     loc_404CEB
On x86 the cmp instruction performs a subtraction of the two parameters without actually storing the result, only updating the FLAGS register, so it is known whether the value is zero, whether an overflow happened, and so on. Then, based on the flag value (in this case, if the zero flag is not set), the jump occurs, or not.
Variadic function
A variadic function is a function that can take a variable number of parameters. The most popular example of such a function is printf. It accepts a format string and parameters, whose number depends on the format string. On a system where parameters are passed through registers, I expect it to get the format through a register and the rest of the parameters through the stack or a dedicated structure, so generally through memory. Once we know how constants are accessed, it should be quite easy to identify, as it will most likely get the format string as one of the first parameters, and somewhere close to that the remaining parameters should be stored, one after another.
Identifying patterns
Now, as we know what patterns we will look for, it is time to find them in the code and guess the functions of particular opcodes.
Read-only memory access
Let’s look at area near reference to our format string:
After splitting the instruction words into parts and decoding the opcode, register and immediate values, we can see that the string's address is based on register $4 and is also stored in register $4. If we look a few lines upwards, we can see that register $4 is computed based on register $0 and offset 0x0b (marked in orange). Register $0 is often used to always store the value 0. Now, if we look at the original offset of the string in the firmware, we can see that it is 0xbab5c! So that instruction must store the immediate value in the register's upper half. Therefore, we have just guessed the function of opcode 06. Later this opcode will be described as lh (load high).
By the way, we have also discovered that the firmware image is almost surely mapped at address 0, either after being loaded into operational memory or, more likely, by mapping the EEPROM into the address space.
Compare and jump
In the snippet above, one more thing is quite interesting, and weird at the same time. There are a few instructions that seem to contain no encoded registers and have weird, uneven offsets, often with quite low values. At this moment my theory is that the shorter ones are some kind of jumps (like at 0x91fd8) and the longer ones are function calls (like at 0x91fd0). Then it is time to try to find the compare-and-jump pattern.
If we are right, then opcodes like 00 and 04 are jumps and 01 means a call. After this section we should also be able to tell conditional and unconditional jumps apart.
OK, so we now need to go back to the source code and find some good candidate for a conditional jump. It should be as close to the format string as possible. If we look at the snippet from the Identifying target section, we can see one such check on line 4110. It checks whether a value is greater than zero. Going upwards a little bit, we encounter one 04 opcode, immediately preceded by a 2F instruction:
Now, if we look at the occurrences of the 2F opcode, we can spot that it usually appears near the 04 opcode. However, we cannot say that they always appear in exactly this order, which is quite weird. On the other hand, if we look at the registers this particular occurrence uses, it is quite likely a compare opcode.
If we assume that 2F is cmp (compare) and 04 is jg (jump if greater), we see that this more or less matches the behavior we expect from the code immediately preceding the sprintf call in the FreeRTOS source code.
However, we are still missing one piece of information: what does the offset mean? If it is a jump instruction, then we cannot jump 10 bytes ahead, because we would land in the middle of an instruction. If we look again at the source code, we can see that our jump should not go very far forward, so the value should also not be too high. We can also exclude the use of a register as the address, because it would be register $10, which is not set anywhere near the jump.
Having no other idea, I did an experiment. I multiplied the jump value by 4, because the instruction length is always 4, and added the next instruction's address to the result. Then I checked what was there and… bingo! It jumped over the sprintf call and ended up immediately after it. Some time later, I discovered that this is not completely true. It happens that the real formula is:
addr = imm * 4 + PC
where imm is the instruction argument and PC is the program counter before executing the instruction (so the address of the jump opcode).
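As a purely hypothetical example: a jump opcode at 0x91fd8 with imm = 0x0a would land at 0x0a * 4 + 0x91fd8 = 0x92000, i.e. 40 bytes past the jump itself.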
The question still is how more sophisticated compares are performed, because every one I’ve seen is just telling which value is greater. As there does not seem to be any flag in instruction, maybe there is no other option and to do that some arithmetic operation must be done to bring them to greater than operation?
Function call
Another thing we can learn near the sprintf function is how parameters are passed to a function. The signature of sprintf is:
int sprintf(char *str, const char *format, ...);
So after analyzing its call, we should know how at least the first two parameters are passed. Let's see how it looks in machine code:
Here another interesting detail appears: the last instruction setting the registers appears after the actual call. This is perfectly normal and can also be found on the MIPS architecture (the delay slot). Its purpose is to allow concurrent execution of the two instructions.
Variadic functions
Now, if we scroll a bit upwards, we can see an interesting bunch of 35 opcodes. We know that our call should have more than 2 parameters, and thanks to the format string we can tell that there should be exactly 5 extra parameters. If we count the 35 opcodes, we see that these numbers match.
So we can tell almost for sure that opcode 35 takes the value of its third register parameter and stores it at an address computed as follows:
addr = reg1 + reg2 + imm
So e.g.
*($0 + $1 + 0x0c) = $26
Unfortunately, with only that information we can merely guess the order of the parameters, i.e. whether $4 is the first parameter or the last one.
For the future, we will denote opcode 35 as sw (store word).
Function prologue and epilogue
As we know exactly at which address the call will land, we can try to decode the function prologue and epilogue, i.e. what happens just after the call and just before returning. Let’s see how such code looks, using the sprintf function as an example:
By the way, immediately after sprintf there is the strlen function, which is also called by the function we are analyzing. We see that the registers stored with sw instructions are then restored by opcode 21, so we can safely assume it is the reverse operation and denote it as lw (load word).
And we see that the last jump in the function is done by opcode 11, so we can denote it as ret (return). I still don’t know the meaning of its parameters. If we use the standard decoding, it would be:
ret $0, $0, $9, 0x00
But I have no proof that this is the real meaning. I only see that sometimes this third hypothetical register has a different value, but usually it is $9.
From my experience, the register saving we see here is done to the stack. If that is also true in this case, then we have two candidates for the stack pointer: $1 and $31. Some more investigation must be done to tell which one is SP.
Other methods
We can also try to find constants other than strings. Then there is a chance that some arithmetic operation will be performed on them. Personally, I haven’t tried that approach, so I can’t show any example.
Another method might be finding references to some known structures. We can see one such structure in the function we analyzed (TaskStatus_t). This is also left as an exercise for the reader.
Conclusion
The main focus of the analysis was branching. As shown, we now know quite a lot not only about branching, but also about the whole ABI. As soon as the main entry point of the system is found, it should be possible to discover the complete flow of the program.
We now know that the first parameters are passed in registers $3, $4 and possibly the following ones. After the analysis of the function prologue and epilogue, we also know that here the callee is responsible for preserving register values.
To sum up, we already know the following instructions (a small decoder sketch follows the list):
00: jmp off
01: call off
03: j? off
04: jg off
06: lh $r1, $r2, imm
11: ret
21: lw $r1, $r2, $r3, imm
2A: la $r1, $r2, imm
2F: cmp $r1, $r2
35: sw $r1, $r2, $r3, imm
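Below is a sketch of a table-driven decoder for these opcodes, assuming the 6-bit opcode, 5-bit register fields and 16-bit immediate worked out earlier in this series; how the three-register forms (lw, sw) split the remaining bits is still a guess, so they are printed in the generic form too.

import struct

# Working mnemonics assigned so far; anything unknown is printed as a raw opcode.
KNOWN = {
    0x00: "jmp", 0x01: "call", 0x03: "j?", 0x04: "jg", 0x06: "lh",
    0x11: "ret", 0x21: "lw", 0x2A: "la", 0x2F: "cmp", 0x35: "sw",
}

def decode(word: int) -> str:
    op = word >> 26               # 6-bit opcode
    r1 = (word >> 21) & 0x1f      # first 5-bit register field
    r2 = (word >> 16) & 0x1f      # second 5-bit register field
    imm = word & 0xffff           # 16-bit immediate
    return f"{KNOWN.get(op, f'op{op:02x}')} ${r1}, ${r2}, {imm:#x}"

def disassemble(blob: bytes, base: int = 0) -> None:
    # instructions are fixed 4-byte, big-endian words
    for off in range(0, len(blob) - 3, 4):
        (word,) = struct.unpack_from(">I", blob, off)
        print(f"{base + off:#08x}: {decode(word)}")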
We also know that this architecture has a delay slot mechanism, identical to the one in MIPS. Together with the fixed-size instructions and the opcode and register field lengths, it is really similar to MIPS. Unfortunately it is not exactly the same, so reverse engineering of the ISA has to continue.
Unfortunately, after doing the research described here, I see that the tools I used are not enough to do further reversing efficiently. So, before taking another step forward, I have to find a way to introduce more automation into the process. As soon as I succeed, I will write the next part, so stay tuned!
In the first part, I identified two main problems for further development. The first one is the unidentified checksum appended to the end of the firmware image. The second one is the unknown LZSS-like compression algorithm used to compress the machine code of the application processor’s firmware.
Encoder firmware
The thing that has remained more or less unexplored until now is the encoder firmware. The LKV373A consists of two processors – the application processor, whose firmware was analyzed in the first part, and the encoder, whose reversing I am going to push forward in this article.
The target firmware I am working on is called LKV373A_TX_V3.0c_d_20161116_bin.bin and is obtainable from danman’s firmware collection.
At first sight, the encoder firmware looks completely different from ITEPKG. The latter was completely structured. This one starts with some data fields, then there is a big block of random-looking data interlaced with some strings, and at the end there is the familiar SMEDIA02 structure. This block of “random” data is our target.
The first idea I had was running binwalk with the -A switch to look for some known opcodes. No luck. But if we look closer, it seems that this is in fact some kind of machine code.
Finding target
Ok, what we know now is that in the firmware image there is a region with many strings written one after another, as in the first hexdump. In another place there is quite a lot of data where some of the words are similar to each other. One such fragment can be seen in the second hexdump.
So we can guess that there is a code region around 0x81680 and a data (string) region near 0xbbc70. Now the question is: how to prove it?
Before proving our hypothesis, one thing is worth noting. If we are right and the bytes here really are machine code, then we can be almost sure that instructions are always (or almost always) encoded as 32-bit words. That fact will be very useful when we try to find candidate architectures to test against our characteristics.
How to prove it?
Fortunately, we can see one interesting candidate string. There is a debug message marked in red in the data region. If our guess is valid, we should find some reference to it, and moreover we can expect there to be only one reference to that particular string.
But now there is another problem: how do we find a reference to a string stored somewhere in memory, at an offset we have no chance to guess? Here experience with any machine code might be useful. Usually assembly mnemonics are translated more or less into the form in which the programmer wrote them. So, if we have a hypothetical instruction:
add $1,$2+0x1234
Chances are it will be translated to something like:
0x21 0x01 0x02 0x12 0x34
Where the length of each of these fields and the maximum offset that can be encoded depend on the particular architecture.
We can make another assumption. If persistent memory (like EEPROM) is eventually mapped into operational memory, usually nobody designs the device in such a way that a section starts at an address not aligned to, for example, the page size. So, if we are lucky, the final address of the string in memory should have the same least significant bits as its offset in the firmware image.
Then, connecting the dots, we can try searching the firmware for, let’s say, the two least significant bytes of the string’s offset (0xbcc8). One more remark here: we still don’t know the endianness of the processor. And what is worse, after reading the firmware format description we know it even less. So we have to check both variants.
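Such a search is trivial to script; below is a minimal sketch, assuming the firmware file name used in this article.

# Search the firmware for the string offset's two least significant bytes,
# in both byte orders, and print the offsets of all hits.
with open("LKV373A_TX_V3.0c_d_20161116_bin.bin", "rb") as f:
    fw = f.read()

for label, needle in (("big endian", b"\xbc\xc8"), ("little endian", b"\xc8\xbc")):
    hits, pos = [], fw.find(needle)
    while pos != -1:
        hits.append(hex(pos))
        pos = fw.find(needle, pos + 1)
    print(label, hits)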
Of course, in the case of a different firmware for a completely different architecture, all of this might fail. There is only one piece of advice for succeeding with such analysis: be creative!
The first impression is that I was wrong: there are in fact 5 hits. But… if we look at the general firmware structure, we can see that most of the hits are inside the SMEDIA container which, as I described in the previous post about reverse engineering the LKV373A, consists mostly of compressed data and is unlikely to contain any uncompressed code. So, success! We have only one hit inside the area we suspected to be code.
By the way, we now know more than just that the area is in fact a code section. We see that numbers in the machine code are stored as big endian and, as we can see, 16 bits can be used as an offset here. That further narrows the number of possible architectures. For instance, the popular ARM architecture is (usually) little endian, so it is not likely to be the one used here.
I’ll leave verifying that we are right, using other strings present in this firmware, as an exercise.
Further guesswork
The next thing that might ease our work a bit is finding out the whole instruction format, that is:
how many bits are used for opcode?
how many operands can we use?
how many bits encode an operand?
When I was doing this work, I was not experienced enough to see it from the single word we already have. My approach back then was to guess the meaning of the 0xA8 opcode we see. As we know that it points to a string in possibly write-protected memory, we should not expect it to perform any arithmetic or logic operation. It is rather likely to take the pointer to the string and store it somewhere. Since the architecture gives us only 32 bits per instruction, it is not likely to copy memory contents anywhere. To me the most probable operation is the computation of the string’s address, based on some base register, and storing the result in another register. Therefore, I will call the operation “Load Address” (abbreviated “la”).
Because that information alone does not give us any more hints, but it can be used to match our mysterious architecture against known ones, it is high time for even more guesswork.
Specification matching
Wikipedia has a quite useful page that might help us. But first it is good to check some popular architectures to see if they match.
As an example, I will use the MIPS architecture. It is often used in embedded systems and was quite popular in routers, especially in the past, when they were not yet running Linux. Furthermore, it is the most popular architecture that might be big endian, so it seems like a perfect first check.
Now, we can find some hints on Wikibooks. On MIPS there is an instruction format that might match our characteristics:
I Format:
opcode | rs     | rt     | imm
6 bits | 5 bits | 5 bits | 16 bits
After a search in the MIPS manual (its title is “MIPS32™ Architecture For Programmers Volume II: The MIPS32™ Instruction Set”; it is probably no longer available from the original source, only from some random sites, so no link here), we can see that the instruction whose first 6 bits match our target word (0b101010) is “Store Word Left” (swl), so it is not really what we expected.
Now we have to move on and check another architecture, and another, and another, until we either decide we can’t find anything matching or we finally find something. In my case the result was a failure. I checked:
MIPS
ARM
ARC
RISC-V
PowerPC
SPARC
MicroBlaze
Xtensa
IBM S/390
Motorola 68k
DLX
Mico32
LEON
OpenRISC
NIOS II
m32r
And nothing matched. So the architecture here is something really uncommon. Because the device probably performs HDMI signal processing besides acting as a normal processor, it is likely that this is in fact a soft processor programmed into some FPGA. Therefore it is likely that finding the architecture’s documentation will be impossible.
Decoding instruction format
Then we can try another approach. Let’s play a little and try to decode the rest of the instruction using the MIPS I format above. The first 6 bits are the instruction opcode (0x2A or 0b101010), next are two 5-bit fields meaning destination and source register, so we have register 3 as destination and register 3 as source. That definitely makes sense!
So our hypothesis is that we have a 6-bit opcode, 5-bit operands and a 16-bit offset. Here, again, proving that is left as an exercise for the reader.
Going back to our target instruction, we now know that it executes something like:
$3 = $3 + 0xbcc8
We will denote this instruction as:
la $3,$3+0xbcc8
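Below is a minimal check of that field split. The 32-bit word 0xA863BCC8 is my reconstruction of the target instruction (first byte 0xA8, both register fields equal to 3, immediate 0xbcc8) and serves only as an illustration.

word = 0xA863BCC8

opcode = word >> 26            # -> 0x2A (0b101010)
dst    = (word >> 21) & 0x1f   # -> 3
src    = (word >> 16) & 0x1f   # -> 3
imm    = word & 0xffff         # -> 0xbcc8

print(f"la ${dst},${src}+{imm:#x}")   # -> la $3,$3+0xbcc8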
What do we know?
Now, to sum things up, we have learned the following about the device’s architecture:
32-bit (4-byte) static instruction length
big endian
6-bit opcodes (maximum number of opcodes is 0x40)
5-bit operands (32 general purpose registers available)
indirect addressing of up to 65535 bytes (0xffff)
What next?
As we now know quite a lot about the architecture, we can choose one of two ways: try to find out the name of the architecture and locate its documentation (not likely to succeed), or reverse engineer as many instructions as we can. My choice will probably be the latter, and if I succeed in pushing the knowledge forward, I will try to describe the process in the next article in the series.
NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 25th July 2017.
As I promised some time ago, I am now going to describe the process of pre-personalization of a JCOP card. JCOP is one of the easier-to-get JavaCard-compatible cards, although they cost a bit. The problem with the ones available from eBay sellers is the lack of pre-personalization. Ok, there are some advantages to buying a non-pre-personalized card, like the ability to set most of its parameters, but on the other hand it is quite easy to make such a card unusable.
Online resources, as I mentioned in the first part of the tutorial, are not very descriptive. They say that there is such a thing as pre-personalization and that it has to be done before using the card, flashing applets and so on. There is only one source that helps a little: someone has written a script for the process. However, there are two problems with the script. The first one is that it is written in some custom language and the internet knows nothing about the interpreter; it is probably something provided by NXP – the manufacturer of JCOP – to its customers, and neither I nor (probably) you, reader, are their customers. The consequence is that we only have a script in a custom language, with commands like ‘/select’ or ‘/send’. Fortunately, the documentation of ISO 7816 (smart card communication) allows us to decipher this, so that problem could finally be solved. The other problem is the lack of command values and memory addresses, so we do not know where and how to read/write/execute anything. After a really deep search on Google, I was finally able to find all the missing values, so this tutorial could be written.
Process overview
Ok, after this way too long historical introduction, let’s see what will be needed. I assume you are already able to communicate with your card using raw APDUs. If you are not, there are quite a few resources to learn from up to this point, so I will not describe it here. The most important thing is to have the so-called transport key (KT). If you do not have it, go get it now. The seller should have provided it to you, and if he did not, you are stuck, since the first step requires this key.
So, basically, the steps will be as follows:
Select root applet with Transport Key
Boot the card
Read/write some data
Protect the card
Fuse it
Easy? Easy. But only if you know the right hex values. Ok, here comes one big WARNING: the last step is irreversible and can quite easily be done by mistake, so think twice before sending anything, and if you are sure that you are done, think twice again before issuing it.
Pre-personalization, step by step
First, we use the Transport Key to select the proper applet. The format of the SELECT command is as below:
CLA=00 INS=A4 P1=04 P2=00 Lc=10 (...)
Where CLA is always zero, INS means SELECT, P1, according to ISO 7816, means selection by DF name, and Lc is the length of KT. After that, the key has to be appended. Of course, the whole APDU has to be given to the communication program as binary or hex values only.
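For illustration, a minimal sketch of building that APDU follows; the transport key below is a made-up placeholder, substitute the 16-byte KT you got from your seller.

# SELECT by DF name with the transport key as data
KT = bytes.fromhex("00112233445566778899AABBCCDDEEFF")  # placeholder, not a real key

select_apdu = bytes([0x00, 0xA4, 0x04, 0x00, len(KT)]) + KT
print(select_apdu.hex())
# with pyscard this could be sent roughly as:
#   data, sw1, sw2 = connection.transmit(list(select_apdu))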
What follows now is specific to NXP cards and is mostly not documented publicly. The first of these commands is the BOOT command. Its format is as follows:
CLA=00 INS=F0 P1=00 P2=00
From this point on, double care has to be taken, because the FUSE command should be available and its APDU consists only of zeros, so any mistake might make the card unusable, since the security keys are generated randomly for each card.
Reading memory
Now, the most important values to read are the ones called CM_KEY and GPIN in the memory dump I shared in the previous post on the topic. The first one can be found at offsets 0xc00305, 0xc00321 and 0xc0033d, each occurrence being 0x10 bytes long. The other one can be found at offset 0xc00412 and by default should be 5 bytes long. However, its maximum length is also 0x10, so it is better to make sure the length is really 5 by reading the byte at offset 0xc00407. To sum up, a read command has to be issued for each of these areas and the results saved for future use.
Here CLA + P1 + P2 form the concatenated address of the memory area to read, INS=B0 is the read command and Lc contains the length of the data to read.
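A short sketch of what those reads would look like, derived from the description above (the exact APDUs are reconstructed here, so treat them as a guide rather than a verified listing):

# CLA/P1/P2 carry the 24-bit address, INS=B0 reads, the trailing byte is the length
def read_cmd(addr: int, length: int) -> bytes:
    return bytes([addr >> 16, 0xB0, (addr >> 8) & 0xff, addr & 0xff, length])

cm_key_parts = [read_cmd(a, 0x10) for a in (0xc00305, 0xc00321, 0xc0033d)]
gpin_length  = read_cmd(0xc00407, 0x01)
gpin         = read_cmd(0xc00412, 0x05)

for apdu in cm_key_parts + [gpin_length, gpin]:
    print(apdu.hex())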
Writing data
Alternatively, it is possible to write custom values to these buffers. This is especially encouraged for users who want to use the card for more than just testing. Overwriting a value is done with an analogous command, in which the user data to write is appended after the header and its length is given in the Lc field.
Required values
Besides securing the keys, it is required to set the CM_LIFECYCLE value to 0x01 and to make sure all fields related to keys and PIN have proper values. Here my memory dump can be used as a reference, since I initialized the card before dumping the memory.
Finishing
After setting all the fields to the desired values, there are two more steps. The first one is issuing the PROTECT command, which looks as below:
CLA=00 INS=10 P1=00 P2=00
And finally, sending the FUSE command with:
CLA=00 INS=00 P1=00 P2=00
Here again, remember that this command cannot be undone!
Well done! Your card should now be pre-personalized and ready to use, even in a production environment. One final remark: the FUSE command probably does not need to be issued at all. However, if it is not issued, the card is completely insecure and should not be used in production.
NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 18th February 2017.
Recently I was asked to configure the internet browser on a thing called Vasco Translator Premium 7″. The device looks exactly like many of the low-end Android tablets from China, and it happens to be one. The problem is that it was locked down to the only allowed application, which is the translator. It has some minor extra functions like a camera, and it seems that was a mistake of its authors: they used the default Android apps as the Camera and Gallery applications (and forgot to lock the send message button in the latter 🙂 ).
First I have to highlight that this is not a full unlock or root of the device, but the ease of the process lets me suppose that rooting the thing should not be too difficult either. Our goal here is to open up the web browser, because, as it seems, the software below the shitty overlay is ordinary Android with all the apps in place. And how useful can a tablet be without an internet browser?
Prerequisites
The tool we will use to do the trick is the good ol’ WAP Push protocol. For some reason this dinosaur has been revived in newer Android devices and is supported out of the box. The goal is to send such a message to our locked device and, since the gallery app mentioned above allows escaping to the Messaging app, read it there. This is probably the hardest part of the process, and it may require buying some additional hardware (if you have access to any service that you know sends WAP Push, you can use it and skip this part).
And that hardware is a GSM modem. It is quite possible that you already have one that can be used. The thing we need is the ability to send SMS through AT commands. Many Android phones allow that, at least if they are rooted, and LTE/3G modems can probably do that too (I haven’t checked that personally). Since the procedure for getting access to the AT interface is completely different for every device, I have to leave you alone with that part. After some time you will probably end up in minicom or a similar program, with parameters like 115200/8N1 or 9600/8N1. In my case (Android with a Qualcomm processor) it is /dev/smd11 and the params are 115200/8N1. Now you can type AT to check if the device you found is really an AT modem (it should respond with OK) and AT+SCA? to check your SMS center address. You should be able to recognize it, or Google it to check that it really belongs to your mobile operator.
Crafting SMS
Now, since we have all the tools, we can start crafting the SMS. I will omit many details here, since even a general description of the PDU format would take a whole article and the complete one is more than 100 pages long. The only part you need to know about is the destination address: this will be the phone number of your device. The trailer of the message is the WAP Push payload, which would need another 100 or so pages to describe, so let’s skip it. As a remark, there is a program called wbxml2xml/xml2wbxml that allows reading/writing the payload. In our case we want to force the device to visit Google.com, so this will be the address of the WAP bookmark.
Ok, so in the picture above, the things we are interested in are [dest_addr] and [dest_len]. The first encodes the telephone number “+37201234567” (note the lack of the ‘+’ sign), the second its length (as a number of digits, 0x0b == 11). The number of your device should be placed here, and then you can move on to the next section.
Or you can try customizing the payload. The important part here is marked as [WBXML] and can be crafted with the program mentioned before. After changing it, the [ud_len] value has to be adjusted to the number of bytes in the payload (those after the length byte).
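For reference, a minimal sketch of how the destination number ends up in [dest_addr] (digits in swapped semi-octets, padded with 0xF when their count is odd; the type-of-address byte that also appears in the full PDU is left out here). The number is the placeholder one from the text.

def encode_number(number: str) -> bytes:
    digits = number.lstrip("+")
    if len(digits) % 2:
        digits += "F"                       # pad odd-length numbers
    # swap each pair of digits: "372012..." becomes "73 02 21 ..."
    return bytes.fromhex("".join(d2 + d1 for d1, d2 in zip(digits[::2], digits[1::2])))

dest_len  = len("37201234567")              # 0x0b, the number of digits
dest_addr = encode_number("+37201234567")
print(f"{dest_len:02x}", dest_addr.hex())   # -> 0b 7302214365f7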
Sending SMS
Since we already have the modem, we need to type an AT command to initiate message sending. But before that, we need to ensure that we are in binary (PDU) mode. Type AT+CMGF? and, if the value is other than zero, AT+CMGF=0. Now start sending with:
AT+CMGS=55
Where 55 is the length of the payload in bytes, not counting the SMSC header (the single byte at the beginning). The modem should respond with a > prompt, where the SMS can be typed.
And after that press CTRL-Z (^Z) in your terminal. This should send SUB (substitute) to the modem. It is important not to type any characters in between, like spaces or ENTER. After about a second you should see that sending was successful and no error was returned.
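The same flow can be automated, for example with pyserial; this is just a sketch, the port is the one from my device and the PDU string is a placeholder for whatever you crafted in the previous section.

import time
import serial

PDU = "<hex-encoded PDU from the previous section>"   # placeholder
port = serial.Serial("/dev/smd11", 115200, timeout=2)

def at(cmd: str) -> str:
    port.write((cmd + "\r").encode())
    time.sleep(1)
    return port.read(port.in_waiting or 1).decode(errors="replace")

print(at("AT"))          # sanity check, expect OK
print(at("AT+CMGF=0"))   # switch to binary (PDU) mode
# length in octets, not counting the single SMSC byte at the beginning
print(at(f"AT+CMGS={len(PDU) // 2 - 1}"))   # modem should answer with '>'
port.write(PDU.encode() + b"\x1a")          # the PDU followed by CTRL-Z, nothing else
time.sleep(2)
print(port.read(port.in_waiting or 1).decode(errors="replace"))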
Receiving and opening
Now, if you have your translator turned on, you should hear that a new message was received, but nothing will appear on the screen. That is ok. The rest of the procedure is shown in the video below:
Postscriptum
After another few minutes of playing with the device I found another method of opening the browser, and it is way faster than the one described above. But the first one was much more entertaining for me, and it shows one of the many places where serious bugs can be found – forgotten technologies that are still implemented and possibly used, while the general public knows little about their details.
You can see the other method in the video below, and it is probably the one you will want to use.
NOTE: This post was imported from my previous blog – v3l0c1r4pt0r.tk. It was originally published on 8th February 2017.
Some time ago I was struggling with a JCOP smart card. The one I received, as it turned out, was not pre-personalized, which means some interesting features (like setting the encryption keys and PIN) were still unlocked. Because the documentation and all the usual helpers (StackOverflow) were not very useful (well, ok, there was no publicly available documentation at all), I started a very deep search on Google, which ended in full success: I was able to dump the whole memory available during pre-personalization.
Since this is not something that can be found online, here is a screenshot of it, colored a bit with the help of my hdcb program. Without documentation it might not be very useful, but maybe somebody will need it in an emergency.
A small explanation: the first address I was able to read was 0xC000F0, and the first address giving a read error after the configuration area was 0xC09600. I know that, despite the lack of privileges, some data is placed there.
There are three configurations: cold start (0xc00123-0xc00145), warm start (0xc00146-0xc00168) and contactless (0xc00169 to at least 0xc0016f). A description of the coding of the individual fields is outside the scope of this article. I hope to describe them in the future.
Next time, I will try to describe the process of pre-personalization, that is, making a non-pre-personalized card (easy to get from the usual sources of cheap electronics) able to receive and run applets.
Update: Next part of this tutorial can be found under this link.