Printing pictures like it's 1873 using the Oki 3321 dot-matrix printer

Steinway Hall, 1873
According to Wikipedia, the oldest halftone image printed in a newspaper, back in 1873

A long, long time ago, before prices of inkjet and laser printers fell to levels that allowed home users to own and use them, there was a primitive printing technology called dot matrix. Like any technology of the past, it is not competitive anymore. However, it still has a few advantages, and one of them is the reliability of these devices. Some time ago I found a quite cheap Oki 3321 printer that has a 9-pin head and is capable of printing on A3 paper in portrait orientation. The usual mode of printing for these devices was a simple text mode, where you just wrote your text in ASCII (or whatever weird encoding was popular in your country of origin) to the printer's parallel port. Fortunately, these printers usually also had a graphic mode, where you could make full use of the device's capabilities.

I had already been experimenting with my device for some time, so I knew it uses a Mazovia variant (with zł as a single glyph) as its codepage. I was also able to guess how to switch it into graphic mode, so in theory I had been able to print images for a while. Unfortunately, none of the CUPS drivers I tried provided acceptable results, so all I could do was write a support tool myself.

Meet png2lp

png2lp is a tool that (as its name suggests) converts PNG images to a format understandable by line printers. I tried to write it in a way that allows further extensions, so in theory it is possible to add support for a printer that works in a completely different way than mine. There is only one limitation – the input PNG image must be saved in indexed mode with only two colors in its palette. So, you cannot support a printer that prints more than two colors or shades (or you could, but png2lp cannot give you input of sufficient quality).

Usage is quite straightforward – you supply a filename as a parameter, or give “-” as the filename and put the data on standard input. On stdout, you get the converted image. As the program supports more than one output format, you also have to specify the desired format. You can list the available sinks by typing:

png2lp -l

Then you can choose your printer from the list and list the available page formats:

png2lp -L oki3321

And finally you can convert the image by typing:

png2lp -p oki3321 -P a4-p 484px-Tux_mono.svg.png

Converting ordinary images to 1-bit PNG

Tux logo
Tux logo (source: wikipedia.org)

As you might have noticed, there are not many 1-bit PNG images around these days, so what you might want to do is convert some existing image into what png2lp expects. It is easiest with SVG images, as they are quite flexible and you won't lose much quality during conversion. In the case of the Oki 3321, and probably also other Oki printers from those days, the maximum resolution I was able to get for a portrait A4 page is 484×760 dots. In practice, however, the printed image is not exactly proportional. This is not something that should be fixed in png2lp itself, as that might break more precise images. So, we have to deal with the problem during conversion to the input PNG image.

Let’s take the well-known Tux logo as an example. It can be downloaded from Wikipedia as SVG; then we can use ImageMagick to do all the transformations at once:

convert Tux_Mono.svg -resize 90%x106% -resize 484x760 -monochrome 484px-Tux_mono.svg.png
484px Tux logo
Tux converted to 1-bit PNG and stretched to match the printer’s distortion

After that, we can try to pipe the result of the png2lp example above to our line printer device, which is usually /dev/usb/lp0 or /dev/lp0 on Linux. One note here: png2lp does not produce a form feed character, to allow embedding its output into a bigger page. You have to write it manually or press the button on your printer to get the page back!
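For example, assuming the printer sits at /dev/usb/lp0, the whole pipeline (including the manual form feed) could look like this:

png2lp -p oki3321 -P a4-p 484px-Tux_mono.svg.png > /dev/usb/lp0
printf '\f' > /dev/usb/lp0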

Results

Tux printed on Oki 3321
Printed Tux

As can be seen in the picture above, there is quite a huge margin at the bottom. Not much can be done about it – if I try to print anything there, the page sensor disengages and the printer stops printing until a new page is fed. In theory, it is possible to override this behavior by pressing the select button a few times (every press allows printing one additional line).

Another thing we can see here is the exceptional quality of the printout 🙂 With this, we can do even less, as this page was already printed using the so-called “high quality” mode.

Output format

Finally, a few words about the format of the input data the printer has to receive. It is based on escape codes, so at first it may seem similar to the ANSI escape codes used in terminals, e.g. for coloring or for providing a window-like experience with the help of ncurses. Despite that, I don't think any of these codes were ever standardized. At least some of them come from IBM printers, so they may be compatible with many devices from different vendors, as IBM was the one creating standards at that time.

For printing in graphical mode there are two codes I am aware of that allow printing virtually everything: \eK and \eJ. Their formats are as follows:

struct K {
  uint8_t esc;      /* 0x1B */
  uint8_t K;        /* 'K' */
  uint16_t columns; /* number of data bytes to follow, little endian */
};

struct K k = {'\033', 'K', htole16(columns)};

struct J {
  uint8_t esc;    /* 0x1B */
  uint8_t J;      /* 'J' */
  uint8_t offset; /* paper feed amount */
};

struct J j = {'\033', 'J', 0x18}; // 0x18 - experimentally found magic
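To show how the two combine in practice, below is a minimal sketch of sending one row of graphics (my illustration, not png2lp source). It assumes fd is an open descriptor to the printer and data holds one byte per column, each bit driving one pin of the head:

#include <endian.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Sketch: send one row of graphics, then feed the paper. */
static void print_row(int fd, const uint8_t *data, uint16_t columns)
{
  uint8_t k[4] = {'\033', 'K'};
  uint16_t le = htole16(columns);
  memcpy(&k[2], &le, sizeof(le));     /* column count, little endian */

  write(fd, k, sizeof(k));            /* \eK - start of graphic data */
  write(fd, data, columns);           /* raw column bytes follow */

  uint8_t j[3] = {'\033', 'J', 0x18}; /* \eJ - advance to the next row */
  write(fd, j, sizeof(j));
}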

mhz14a – program for managing MH-Z14/MH-Z14A CO2 sensors via UART

MH-Z14A CO2 sensor

When I saw a CO2 sensor for the first time, it was quite an expensive device. Even today, a consumer device of this kind can still cost a lot. However, in the days of cheap Chinese electronics sellers on the biggest auction platforms, the situation for makers is quite different. The MH-Z14 is the cheapest CO2 sensor I was able to find. It costs about $19 and comes in a few variants: MH-Z14 and MH-Z14A, measuring up to 1000 ppm, up to 2000 ppm or up to 5000 ppm. However, the range does not matter much in practice, as it is possible to switch between the ranges using UART.

The device's interfaces are quite flexible for such a cheap device, as beside the mentioned UART port it provides PWM and an analog output. However, I was not able to measure a valid value from the analog output with my cheap multimeter. Maybe some more sophisticated equipment is required for that.

I have to make one note here: the device I bought is labeled MH-Z14A and its range is 0-5000 ppm. Other variants might have different features. For mine, there is no UART protocol documentation. Yet, the protocol documented under the name MH-Z14 works, so be careful.

mhz14a – UART protocol implementation

As the UART protocol of the device is quite complex and, more importantly, a binary one, it is necessary to communicate with the device programmatically – using just a terminal application is impractical. For that purpose I created the mhz14a program. It wraps all the internals with a nice getopt-based interface, so it can be used from both interactive shells and scripts. All the user has to know is the path to their UART device and the desired command (most likely read – -r).
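To give an impression of what the tool hides: according to the datasheet circulating under the MH-Z14 name, every request is a 9-byte binary frame ending with a one-byte checksum. The sketch below is my illustration of that protocol, not code taken from mhz14a:

#include <stdint.h>

/* "read CO2 concentration" request: start byte, sensor no. 1, command 0x86 */
static const uint8_t read_cmd[9] =
    {0xFF, 0x01, 0x86, 0x00, 0x00, 0x00, 0x00, 0x00, 0x79};

/* checksum over bytes 1..7, per the datasheet: negate the sum and add one */
static uint8_t mhz14_checksum(const uint8_t frame[9])
{
  uint8_t sum = 0;
  for (int i = 1; i < 8; i++)
    sum += frame[i];
  return 0xFF - sum + 1;
}

/* in the 9-byte response, bytes 2 and 3 carry the reading:
 * ppm = (response[2] << 8) | response[3] */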

At first, the program has to be compiled. For now, installation in the system is not possible, so compilation will leave the binary in the build/ directory. The project uses cmake, so to compile it one has to execute:

mkdir -p build && cd build
cmake ..
make
src/mhz14a --help

To sum up, basic usage of the program looks like below (lines starting with $ are the commands I type):

$ src/mhz14a -r -d /dev/ttyUSB0
410
$

Device documentation

Some sellers provide a basic guide of what is where themselves, so digging for a datasheet might be unnecessary. Fortunately, even if they don't provide anything, Winsen, the manufacturer, provides everything needed. In this case – nothing new in the world of Chinese manufacturers – it is worth looking at the Chinese language variant instead of the English one, as it contains more information.

Final words

For the moment, I have not implemented the commands described only in the Chinese documentation, so it is not possible to switch between the supported ranges or to turn off automatic calibration. However, this should not be an issue in most cases, as what most users need is just to read the sensor data.

I might do some more work with the sensor in the future, as my initial idea was to switch to the analog output just after setting things up via UART. Unfortunately, I have problems with the analog pin readings. If I decide to develop the project again, I am definitely going to implement these hidden commands, as well as do some refactoring, as the code sometimes looks really bad (thankfully, I was writing some unit tests, thanks to one of my previous projects – the cmocka/cmake template – which I used here).

SADVE – tiny program for computing #define values

While tinkering with a spy camera, I found one detail that significantly slows down the process of reverse engineering and debugging the applications installed on its embedded Linux platform – finding the final values of preprocessor directives, and sometimes also the results of the sizeof() operator.

As I am not aware of any existing solution to that problem (I guess there might be one included in some of the more sophisticated IDEs; however, I use Vim for development), it was a good reason to create one. By the way, I used the cmake template I published some days ago to bootstrap the project.

Usage

Ease of use was the main goal here, as it is obviously possible to create an improvised solution by writing a hello-world type of program, including the required headers and printing the symbol whose value we want to compute.

So, to be able to use SADVE, you just have to clone the repo and use the standard cmake installation commands, and you're done:

mkdir -p build
cd build
cmake ..
make
sudo make install

Then you can call it like below:

sadve -d AF_INET sys/socket.h

And you should get 2 as the answer. That’s it. If instead you want to get the size of some structure, you can type:

sadve -s sockaddr sys/socket.h

And you should get the size of the sockaddr structure. Obviously, you can see the full usage with sadve --help.

Internals

Internally, the program simply automates the process I described in the first paragraph of Usage – it applies what the user requested to a hello-world-like template and compiles it. Therefore, it might not be the best idea to make it a backend for a web service available to the general public, at least not without a lot of isolation and input sanitization. For private usage, however, this should be enough. If you are interested in doing such a task with cmake, I encourage you to dive into the source code on Github.
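Conceptually, the generated source might look like the sketch below, where @HEADER@ and @SYMBOL@ stand for the values substituted by cmake – this is only an illustration of the idea, not SADVE's actual template:

/* hypothetical template - placeholders are filled in before compilation */
#include <stdio.h>
#include <@HEADER@>           /* e.g. sys/socket.h */

int main(void)
{
  printf("%d\n", @SYMBOL@);   /* e.g. AF_INET; -s uses sizeof(struct @SYMBOL@) */
  return 0;
}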

To speed the process up, I had to store all the cmake build files in ~/.cache, with no interface for cleaning them up yet.

Using CMocka for unit testing C code

CMake logo

Writing unit tests along with the source code (or even before the code itself – see TDD) is currently very popular among programmers writing in languages like Java or C#. For C code, however, things are a bit different. There are only a few frameworks enabling the writing of unit tests, and one of them is quite special – it allows mocking functions. Its name is CMocka. Unfortunately, there are not many resources describing the process of setting up cmocka, especially together with cmake, in a way that lets programmers add new executables, tests and mocks without unnecessary overhead. But before showing how to do it, let's go back to basics (if you already know them, you can skip the next heading).

What is mocking?

Mocking is a mechanism that allows substituting an object we do not want to test with an empty implementation, which we can further configure to do whatever we like, e.g. simulate errors. Usually, objects are mocked because we import them from some external library, and it is not the purpose of unit testing to test these external dependencies. This is how mocks work in object-oriented languages like the ones mentioned at the beginning. In C, it is only a bit different in that, instead of mocking an object (which does not exist in C), we mock functions. Similarly to mocking an object, this allows us to control the behavior of an external function and e.g. test the reaction of our code to errors.

To give a very short impression of the possibilities it opens, let's say we want to test a function that accepts connections on a socket. How could we test such a function without touching the accept() function? We would probably need another program that performs connect(). In such a simple case, it requires us to write a second program to cover just one case. Then, what if we want to check the reaction to failed connection establishment? The manpage tells us that ECONNABORTED is returned in that case. So, how do we force our test program to break the connection before the other side returns from accept()? Do you see how complex it gets?

But what if we could link our test program (the one calling the function that uses accept()) against our own accept()? Then, in the first case, we could write our fake function so that it pretends a real socket has been created and returns some positive integer. In the second case, we could just set errno to ECONNABORTED and return -1. That's it!
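A sketch of such a substitute, using the CMocka conventions explained later in this article, could look like this (my example, assuming accept() gets wrapped by the linker as described below):

#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmocka.h>
#include <errno.h>
#include <sys/socket.h>

/* linked with -Wl,--wrap=accept, this replaces the real accept();
 * the test queues the desired result with will_return() */
int __wrap_accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen)
{
  int ret = (int)mock();    /* value provided by will_return() */
  if (ret < 0)
    errno = ECONNABORTED;   /* simulate an aborted connection */
  return ret;
}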

This, and by the way much more, is possible with CMocka. Let's see how to configure the simplest possible project to use cmocka with cmake.

Hey cmake, do my unit tests!

For the purpose of this tutorial, let's say we have quite a simple project. We have just started it, so it's the perfect time to enable unit testing for a TDD approach. It consists of one console (CLI) application split into two modules. The first one – program – is just the main() function and another function it calls. The second one – module – provides just one function, which is also called by main(). For configuration, we have two extremely short CMakeLists.txt files:

CMakeLists.txt
cmake_minimum_required (VERSION 3.0)
project (cmocka_template)
add_subdirectory(src)
src/CMakeLists.txt
add_executable(program program.c module.c)

You can view the code itself on Github. With this configuration it is now possible to build the project with cmake. To enable testing, we first have to append the following script to the main CMakeLists.txt:

27 list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake/Modules")
28 
29 # cmocka
30 option(ENABLE_TESTS "Perform unit tests after build" OFF)
31 if (ENABLE_TESTS)
32   find_package(CMocka CONFIG REQUIRED)
33   include(AddCMockaTest)
34   include(AddMockedTest)
35   add_subdirectory(test)
36   enable_testing()
37 endif(ENABLE_TESTS)

In line 27, we tell cmake to look for additional cmake scripts to include in the cmake/Modules directory. This is required by the includes in lines 33 and 34.

Then, we define an option to enable/disable tests. This allows users not willing to run unit tests to build the project without having to satisfy the dependency on CMocka in line 32.

In that line, FindCMocka will be called and will set up a few variables. Of interest to us is ${CMOCKA_LIBRARIES}. It points to the CMocka library, which we have to link with all our test programs.

The includes in lines 33 and 34 provide functions with analogous names; the first one is part of the CMocka sources, while the other defines a simple wrapper on top of the first, so the user does not have to type all the parameters that usually stay the same from test to test.

At last, we include another build script from a subdirectory and enable testing.

With the help of the mentioned add_mocked_test wrapper, in the simplest case, where we do not use any functions external to the module, all we have to do is call add_mocked_test(module). However, if we do call some external functions, we have to call it like below and provide the sources containing those functions:

add_mocked_test(program SOURCES ${CMAKE_SOURCE_DIR}/src/module.c)

Alternatively, we can also join many sources into a library and pass it like this:

add_mocked_test(module LINK_LIBRARIES mylib)

This finally becomes -lmylib in ld.

Simplest test

As we now have a complete build script, we can write a simple test:

test/test_main.c
24 #include <stdarg.h>
25 #include <stddef.h>
26 #include <setjmp.h>
27 #include <cmocka.h>
28 
29 #define main __real_main
30 #include "program.c"
31 #undef main
32 
33 typedef struct {int a; int b; int expected;} vector_t;
34 
35 const vector_t vectors[] = {
36   {0,1,0},
37   {1,0,0},
38   {1,1,1},
39   {2,3,6},
40 };
41 
42 static void test_internal(void **state)
43 {
44   int actual;
45   int i;
46 
47   for (i = 0; i < sizeof(vectors)/sizeof(vector_t); i++)
48   {
49     /* get i-th inputs and expected values as vector */
50     const vector_t *vector = &vectors[i];
51 
52     /* call function under test */
53     actual = internal(vector->a, vector->b);
54 
55     /* assert result */
56     assert_int_equal(vector->expected, actual);
57   }
58 }
59 
60 int main()
61 {
62   const struct CMUnitTest tests[] = {
63     cmocka_unit_test(test_internal),
64   };
65 
66   return cmocka_run_group_tests(tests, NULL, NULL);
67 }

At first, we have to include some headers. Then, in lines 29 and 31, a preprocessor hack is done. This is because we already have a main() function, used to define the test suite. So, to avoid having it redeclared, we temporarily rename it to __real_main – this is what program.c's main() will be called when it gets included in line 30. For tests of modules not containing a main() function this is superfluous.

In lines 33-40, we define sets of input data to feed into our function, together with the expected results. This is useful if we have many such vectors. For a single input/output pair it is unnecessary. In line 50, one vector is extracted from that array.

Then, in line 53, the function under test is called, and in line 56 its result is checked with assert_int_equal.

Finally, in the main() function, we have to define the list of tests in this suite and call cmocka_run_group_tests to do the rest of the job for us. Done.
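For reference, judging by the vectors above, the function under test looks something like this (an assumption on my side – see program.c in the repository):

/* presumed body of the function under test in program.c */
int internal(int a, int b)
{
  return a * b;
}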

add_mocked_test internals

For a better understanding of the process, let's have a short look at the add_mocked_test() function. Its source is as simple as:

cmake/Modules/AddMockedTest.cmake
32 function(add_mocked_test name)
33   # parse arguments passed to the function
34   set(options )
35   set(oneValueArgs )
36   set(multiValueArgs SOURCES COMPILE_OPTIONS LINK_LIBRARIES LINK_OPTIONS)
37   cmake_parse_arguments(ADD_MOCKED_TEST "${options}" "${oneValueArgs}"
38                         "${multiValueArgs}" ${ARGN} )
39 
40   # define test
41   add_cmocka_test(test_${name}
42                   SOURCES test_${name}.c ${ADD_MOCKED_TEST_SOURCES}
43                   COMPILE_OPTIONS ${DEFAULT_C_COMPILE_FLAGS}
44                                   ${ADD_MOCKED_TEST_COMPILE_OPTIONS}
45                   LINK_LIBRARIES ${CMOCKA_LIBRARIES}
46                                  ${ADD_MOCKED_TEST_LINK_LIBRARIES}
47                   LINK_OPTIONS ${ADD_MOCKED_TEST_LINK_OPTIONS})
48 
49   # allow using includes from src/ directory
50   target_include_directories(test_${name} PRIVATE ${CMAKE_SOURCE_DIR}/src)
51 endfunction(add_mocked_test)

At the beginning, the arguments are parsed, which is outside the scope of this article (those interested can read the official documentation). What is important in this part is that SOURCES from the function call becomes ADD_MOCKED_TEST_SOURCES, COMPILE_OPTIONS becomes ADD_MOCKED_TEST_COMPILE_OPTIONS, and so on.

Then, in line 41, the add_cmocka_test function is called. This function does some of the job for us: we do not have to worry about defining the executable, linking libraries to it and making it a test run by CTest.

In line 42, the source file list is passed to it, so it can build them into the final executable. We can also pass our own compiler and linker flags, which are routed to it in lines 43-44 and 47.

The last thing it does is link the test executable with all the required libraries; the only requirement for CMocka to work is to pass its shared library using the CMOCKA_LIBRARIES variable, which is available thanks to finding the CMocka package in CMakeLists.txt.

Beside that, in line 50 we make the C headers in the src/ directory visible to our test program. That's it.

Enabling mocks

The killer feature of CMocka, however, is its API for creating mocked functions. To use it, on the CMake side all we have to do, beside what we already did, is add flags for the linker (to be precise, -Wl,--wrap=function_name for every function to be mocked). As can be seen in the add_mocked_test source, we could use the LINK_OPTIONS argument for that purpose. However, it saves time to have this integrated into the function's interface. All we have to do is add a loop creating the argument list:

cmake/Modules/AddMockedTest.cmake
40   # create link flags for mocks
41   set(link_flags "")
42   foreach (mock ${ADD_MOCKED_TEST_MOCKS})
43     set(link_flags "${link_flags} -Wl,--wrap=${mock}")
44   endforeach(mock)

Then we add the new MOCKS argument and modify LINK_OPTIONS of add_cmocka_test to pass that list:

53                   LINK_OPTIONS ${link_flags} ${ADD_MOCKED_TEST_LINK_OPTIONS})

Now it should work. On the source side, we can then create a new test in the existing test_program test suite:

79 static void test_main(void **state)
80 {
81   int expected = 0;
82   int actual;
83 
84   /* expect parameters to printf call */
85   expect_string(__wrap_printf, format, "%d\n");
86   expect_value(__wrap_printf, param1, 60);
87 
88   /* printf should return 3 */
89   will_return(__wrap_printf, 3);
90 
91   /* call __real_main as this is main() from program.c */
92   actual = __real_main(0, NULL);
93 
94   /* assert that main return success */
95   assert_int_equal(expected, actual);
96 }

And write the mocked function:

43 int __wrap_printf (const char *format, ...)
44 {
45   int param1;
46 
47   /* extract result from vargs ('printf("%d\n", result)') */
48   va_list args;
49   va_start(args, format);
50   param1 = va_arg(args, int);
51   va_end(args);
52 
53   /* ensure that parameters match the expected values from expect_*() calls */
54   check_expected_ptr(format);
55   check_expected(param1);
56 
57   /* get mocked return value from will_return() call */
58   return mock();
59 }

Now let's take a look at what is happening here. At first, in the test_main function, in lines 85-86, we are telling CMocka that the printf function is expected to be called with a format parameter of a certain content, and that param1 should have the value 60. param1 here is the first variadic argument of printf, and its name is completely arbitrary. The key to a working mock is to use this same name inside the mocked function.

In line 89, we tell the mocked printf to return 3 (this is the number of bytes written to the console; it will be ignored by the code in __real_main, but for completeness it is set to a proper value here).

Finally, we call the function under test and assert its result, the same way as in the example without mocks above.

On the other side there is the mocked printf implementation. In languages like Java, it is common to have such a mock automatically generated based on rules provided by the user. In C with CMocka it is not so easy – we have to write the mock ourselves. However, it is not very hard.

As we chose a variadic function for our mock, and we want to check whether those variadic arguments are as expected, we have to extract them first, which is done in lines 48-51. As a result, we have the param1 variable (notice that it is a local variable). Remember that this name must match exactly what we declared in the test body using one of the expect_* functions – this name is how the function-parameter pair is looked up in the internal dictionaries.

Then, in lines 54-55, the arguments are checked against the expected contents. Note that there is no difference in how we treat positional and variadic parameters in this step.

Finally, in line 58, we extract the value that we want printf to return (declared in line 89). This may seem superfluous in such a simple case; however, if we want to use this mocked printf in more than one test, it is a really useful feature. Keep in mind that the linker does not give us any interface for having two or more mocks of one function, so this way we have a chance to write one universal mock.

Final word

This tutorial was made during my work on a template for CMocka+CMake projects. So now, as you understand how it is done, you can just go to my Github and clone the template to make it part of your project. The template is licensed under the MIT license, so I don't care what you do with it, as long as you keep the original copyright.

Setting up new v3 Hidden Service with ultimate security: Part 4: Installing client certificates to Firefox for Android

Firefox logo

This post is a part of Tor v3 tutorial. Other parts are:

  1. Hidden Service setup
  2. PKI and TLS
  3. Client Authentication
  4. Installing client certificates to Firefox for Android

As we now have a Hidden Service requiring clients to authenticate themselves with a proper certificate, it would be great to be able to use an Android device to access the service. As I showed before, on desktop Firefox it was quite trivial. Unfortunately, things are different on Android. Mobile Firefox does not have any interface for adding certificates. Furthermore, unlike Chrome, it does not use the default Android certificate vault, providing its own instead. On the other hand, under the hood it is more or less the same Firefox, so the support itself is present. Therefore, we need to hack into Firefox's internal databases and add the certificate there. In this part, I will show how to do that.

Caution: similarly to the desktop browser, you should not add any random certificates to your main browser. It is an even worse idea to do the same with Orfox, as it might allow attackers to reveal your identity. Newer Androids have the ability to create user accounts; furthermore, Firefox has a profiles feature, just like on desktop, though harder to use. If you want to do what is described here, separating this configuration from any other is the first thing to do.

Installing CA certificate

Before we do that with the user certificate, let's start with the CA. It is way easier, as Firefox has a convenient feature allowing certificates to be installed by simply browsing to them. All we need to provide is a valid MIME type – application/x-x509-ca-cert. So, all we need is some webserver configured to serve files with the .crt extension as the mentioned type. Just after opening the certificate file, Firefox should ask if you are sure about adding the certificate and allow you to choose for what purposes it will be used. It also allows viewing the certificate, to make sure it is the one we intended to add.
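With httpd, for instance, a single directive along these lines should be enough (assuming the certificates get the .crt extension):

AddType application/x-x509-ca-cert .crt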

Firefox - certificate details
At first, check the certificate
Firefox - PEM download
Then use it only for website identification

In theory, there is a very similar MIME type for user certs – application/x-x509-user-cert – but for some reason, what Firefox says after opening this type of file is:

“Couldn’t install because the certificate file couldn’t be read”

And the effect is the same no matter whether the file is password-protected or not.

Installing client certificate

  1. Go to /data/data/org.mozilla.firefox/files/mozilla on the Android device (root required)
  2. Locate the default Firefox profile. If there is only one directory in the format [bloat].profile, this is it. If not, the file profiles.ini should contain exactly one profile with Default=1. This is the one we are looking for
  3. Download the files cert9.db and key4.db to a Linux machine
  4. Use pk12util to insert the certificate into the database:
$ pk12util -i [filename].p12 -d.
Enter password for PKCS12 file:
pk12util: no nickname for cert in PKCS12 file.
pk12util: using nickname: [email] - r4pt0r Test Systems
pk12util: PKCS12 IMPORT SUCCESSFUL
  5. Upload the files back to Android. Make sure Firefox is not running
  6. Test it by opening your hidden service with Firefox. You should see messages similar to these:
Firefox Mobile - User Identification Request
Request for identification
Firefox - Certificate details
Certificate details
cgit via Tor hidden service
Finally, working cgit via tor!

Setting up new v3 Hidden Service with ultimate security: Part 3: Client Authentication

Secure Card icon

This post is a part of Tor v3 tutorial. Other parts are:

  1. Hidden Service setup
  2. PKI and TLS
  3. Client Authentication
  4. Installing client certificates to Firefox for Android

As we now have a working Public Key Infrastructure, we are ready to use it for more than encrypting traffic (which is already encrypted by Tor). We can very easily turn on client verification on our server. This will prevent anybody who does not have a valid certificate issued by us from visiting our hidden webpage – just in case hiding the domain name in hidden services version 3 somehow leaks the name (which should not happen anymore in v3). In this part, we will issue a client certificate (the procedure is almost identical to the server certificate), then configure httpd to require client identification, and finally configure Firefox to send the certificate. Let's go!

Issuing user certificate

In my case, the tmp directory emulates the client machine and ca is my Certificate Authority, which issues the certificates. We start by creating a request on the client side, then sign it on the CA side.

$ mkdir tmp
$ cd tmp
$ openssl genrsa -out v3l0c1r4pt0r@gmail.com.key.pem 4096
Generating RSA private key, 4096 bit long modulus
........++
..............................................++
e is 65537 (0x010001)
$ openssl req -config ../ca/intermediate/openssl.cnf -key v3l0c1r4pt0r@gmail.com.key.pem -new -sha256 -out v3l0c1r4pt0r@gmail.com.csr.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:PL
State or Province Name [England]:lodzkie
Locality Name []:
Organization Name [Alice Ltd]:r4pt0r Test Systems
Organizational Unit Name []:
Common Name []:v3l0c1r4pt0r@gmail.com
Email Address []:v3l0c1r4pt0r@gmail.com
$ chmod 400 v3l0c1r4pt0r@gmail.com.*.pem
$ cp v3l0c1r4pt0r@gmail.com.csr.pem ../ca/intermediate/csr/
$ cd ../ca
$ openssl ca -config intermediate/openssl.cnf -extensions usr_cert -days 375 \
> -notext -md sha256 -in intermediate/csr/v3l0c1r4pt0r@gmail.com.csr.pem \
> -out intermediate/certs/v3l0c1r4pt0r@gmail.com.cert.pem
Using configuration from intermediate/openssl.cnf
Enter pass phrase for /home/r4pt0r/Research/cubie/newtor/ca/intermediate/private/intermediate.key.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 4097 (0x1001)
        Validity
            Not Before: Feb 27 17:14:40 2018 GMT
            Not After : Mar  9 17:14:40 2019 GMT
        Subject:
            countryName               = PL
            stateOrProvinceName       = lodzkie
            organizationName          = r4pt0r Test Systems
            commonName                = v3l0c1r4pt0r@gmail.com
            emailAddress              = v3l0c1r4pt0r@gmail.com
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            Netscape Cert Type:
                SSL Client, S/MIME
            Netscape Comment:
                OpenSSL Generated Client Certificate
            X509v3 Subject Key Identifier:
                ED:24:E6:FF:1D:9B:61:AC:29:66:39:59:FB:5D:77:25:F7:A3:55:47
            X509v3 Authority Key Identifier:
                keyid:3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70

            X509v3 Key Usage: critical
                Digital Signature, Non Repudiation, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Client Authentication, E-mail Protection
Certificate is to be certified until Mar  9 17:14:40 2019 GMT (375 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
$ cd ../tmp
$ cp ../ca/intermediate/certs/v3l0c1r4pt0r@gmail.com.cert.pem ./
$ openssl pkcs12 -export -inkey v3l0c1r4pt0r@gmail.com.key.pem -in v3l0c1r4pt0r@gmail.com.cert.pem -out v3l0c1r4pt0r@gmail.com.p12
Enter Export Password:
Verifying - Enter Export Password:

The last step was packaging the certificate and key into a PKCS#12 container. This is for securing the key (we can encrypt it with a password), and it is the form required by Firefox. After creating the .p12 file (and verifying it is fine), we can (and SHOULD) delete the source files, as they are not protected in any way.

Configuring httpd to require user certificate

To enforce client verification, the following lines must be added to the virtual host configuration; in our case they might go just after the SSL certificate file paths.

    SSLVerifyClient require
    SSLVerifyDepth 2

We have to reload httpd for changes to take effect.

Installing certificate to Firefox

At last, to start using the newly generated certificate, we should install it in Firefox. The procedure is similar to the one for the CA certificate. We need to open the Certificate Manager window. Then, instead of going to Authorities, we go to Your Certificates, click Import and select the .p12 file.

Firefox Certificate Manager
Certificate Manager / Your Certificates

If the file has a password, Firefox will ask for it before reading the content. If everything went well, you should see your certificate on the list. Now we can try connecting to our hidden service. We should see a window like this:

Firefox - User Identification Request
Server asks for client’s identity

Finally, after confirmation, you should see your hidden service content. Congrats!

Setting up new v3 Hidden Service with ultimate security: Part 2: PKI and TLS

KGPG icon

This post is a part of Tor v3 tutorial. Other parts are:

  1. Hidden Service setup
  2. PKI and TLS
  3. Client Authentication
  4. Installing client certificates to Firefox for Android

After setting up a working Tor hidden service, the next step towards ultimate security is having a properly implemented Public Key Infrastructure (PKI). For this step, there are already a lot of existing tutorials and there is not much that needs to be added to them. Personally, I was using the tutorial available here for the second time now, and I find it very well-written. Because I am going to follow this tutorial, I will just post the commands that have to be executed.

Before starting, I have to add one important remark. To make our PKI a really secure one, it is crucial to have the root CA air-gapped – that is, the device on which it is generated should be permanently disconnected from the internet. A good candidate for such a device might be some old laptop or a Raspberry Pi Zero, as it lacks an Ethernet port and anything reasonable to connect to the internet with. It is also important to store the generated certificate in a safe place and to secure it with a strong non-dictionary password, which will be saved only in our mind.

If the requirements are fulfilled, we can start the setup. Below are the commands to type, as well as their output, to make it easier to determine whether the commands were successful.

Preparations

At first, we need to create the following directory structure:

ca
├── [drwxr-xr-x]  certs
├── [drwxr-xr-x]  crl
├── [-rw-r--r--]  index.txt
├── [drwxr-xr-x]  intermediate
│   ├── [drwxr-xr-x]  certs
│   ├── [drwxr-xr-x]  crl
│   ├── [drwxr-xr-x]  csr
│   ├── [-rw-r--r--]  index.txt
│   ├── [drwxr-xr-x]  newcerts
│   ├── [drwx------]  private
│   └── [-rw-r--r--]  serial
├── [drwxr-xr-x]  newcerts
├── [drwx------]  private
└── [-rw-r--r--]  serial

And the file contents are (enclosed between pipe symbols: |):

./index.txt: ||
./intermediate/index.txt: ||
./intermediate/serial: |1000
|
./serial: |1000
|

Then, we need to save this file as root/openssl.cnf and this file as root/intermediate/openssl.cnf. Inside them, the only thing that has to be changed is the dir property in the CA_default section. Use an absolute path to your directory.
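For example, assuming the tree above was created in /home/user/ca, the relevant fragments would be:

# in root/openssl.cnf
[ CA_default ]
dir = /home/user/ca

# in root/intermediate/openssl.cnf
[ CA_default ]
dir = /home/user/ca/intermediate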

Root CA

Note: when giving values for certain fields, it is better to provide some country, state (I have just checked – it's necessary), organization name and, most importantly, Common Name and e-mail, just in case some program checks whether they exist.

$ openssl genrsa -aes256 -out private/ca.key.pem 8192
Generating RSA private key, 8192 bit long modulus
.................++
....++
e is 65537 (0x010001)
Enter pass phrase for private/ca.key.pem:
Verifying - Enter pass phrase for private/ca.key.pem:
$ chmod 400 private/ca.key.pem
$ openssl req -config openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 \
> -sha256 -extensions v3_ca -out certs/ca.cert.pem
Enter pass phrase for private/ca.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:PL
State or Province Name [England]:lodzkie
Locality Name []:
Organization Name [Alice Ltd]:r4pt0r Test Systems
Organizational Unit Name []:
Common Name []:r4pt0r Root CA
Email Address []:admin@example.com
$ chmod 444 certs/ca.cert.pem
$ openssl x509 -noout -text -in certs/ca.cert.pem
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            9a:16:72:e8:ac:81:cd:be
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = r4pt0r Root CA, emailAddress = admin@example.com
        Validity
            Not Before: Feb 20 17:22:27 2018 GMT
            Not After : Feb 15 17:22:27 2038 GMT
        Subject: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = r4pt0r Root CA, emailAddress = admin@example.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (8192 bit)
                Modulus:
                    00:dd:8c:8f:5d:be:f4:0f:63:91:9c:73:bf:a8:17:
<quite a lot of data>
                    6d:c1:3f:5c:05
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                29:53:8A:D2:ED:CF:35:C2:BB:A8:12:06:01:74:99:A3:B8:E5:DC:FE
            X509v3 Authority Key Identifier:
                keyid:29:53:8A:D2:ED:CF:35:C2:BB:A8:12:06:01:74:99:A3:B8:E5:DC:FE

            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Key Usage: critical
                Digital Signature, Certificate Sign, CRL Sign
    Signature Algorithm: sha256WithRSAEncryption
         a9:6d:9e:d4:bf:1b:55:d8:f0:b5:e9:9d:56:e8:58:04:d6:c3:
<quite a lot of data>
         89:50:26:4f:3e:93:95:06:c7:38:08:c7:16:0e:d2:a2

Intermediate CA

$ openssl genrsa -aes256 -out intermediate/private/intermediate.key.pem 8192
Generating RSA private key, 8192 bit long modulus
.++
........................................................................................................................................................................................................................................................................................++
e is 65537 (0x010001)
Enter pass phrase for intermediate/private/intermediate.key.pem:
Verifying - Enter pass phrase for intermediate/private/intermediate.key.pem:
$ chmod 400 intermediate/private/intermediate.key.pem
$ openssl req -config intermediate/openssl.cnf -new -sha256 \
> -key intermediate/private/intermediate.key.pem -out intermediate/csr/intermediate.csr.pem
Enter pass phrase for intermediate/private/intermediate.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:PL
State or Province Name [England]:lodzkie
Locality Name []:
Organization Name [Alice Ltd]:r4pt0r Test Systems
Organizational Unit Name []:
Common Name []:r4pt0r Intermediate CA
Email Address []:admin@example.com
$ openssl ca -config openssl.cnf -extensions v3_intermediate_ca -days 3650 \
> -notext -md sha256 -in intermediate/csr/intermediate.csr.pem -out intermediate/certs/intermediate.cert.pem
Using configuration from openssl.cnf
Enter pass phrase for ca/private/ca.key.pem:
Can't open ca/index.txt.attr for reading, No such file or directory
140341269315520:error:02001002:system library:fopen:No such file or directory:crypto/bio/bss_file.c:74:fopen('ca/index.txt.attr','r')
140341269315520:error:2006D080:BIO routines:BIO_new_file:no such file:crypto/bio/bss_file.c:81:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 4096 (0x1000)
        Validity
            Not Before: Feb 20 17:35:09 2018 GMT
            Not After : Feb 18 17:35:09 2028 GMT
        Subject:
            countryName               = PL
            stateOrProvinceName       = lodzkie
            organizationName          = r4pt0r Test Systems
            commonName                = r4pt0r Intermediate CA
            emailAddress              = admin@example.com
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70
            X509v3 Authority Key Identifier:
                keyid:29:53:8A:D2:ED:CF:35:C2:BB:A8:12:06:01:74:99:A3:B8:E5:DC:FE

            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:0
            X509v3 Key Usage: critical
                Digital Signature, Certificate Sign, CRL Sign
Certificate is to be certified until Feb 18 17:35:09 2028 GMT (3650 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
$ openssl x509 -noout -text -in intermediate/certs/intermediate.cert.pem
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 4096 (0x1000)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = r4pt0r Root CA, emailAddress = admin@example.com
        Validity
            Not Before: Feb 20 17:35:09 2018 GMT
            Not After : Feb 18 17:35:09 2028 GMT
        Subject: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = r4pt0r Intermediate CA, emailAddress = admin@example.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (8192 bit)
                Modulus:
                    00:d4:c9:03:36:4a:dd:3d:ee:ca:bd:c1:d8:fe:51:
<quite a lot of data>
                    5a:ca:74:74:c8:a2:b2:69:0a:0c:c7:f9:d6:8a:58:
                    41:45:73:fc:2b
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70
            X509v3 Authority Key Identifier:
                keyid:29:53:8A:D2:ED:CF:35:C2:BB:A8:12:06:01:74:99:A3:B8:E5:DC:FE

            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:0
            X509v3 Key Usage: critical
                Digital Signature, Certificate Sign, CRL Sign
    Signature Algorithm: sha256WithRSAEncryption
         15:04:2f:85:89:f6:77:82:c4:60:78:f0:4f:ac:39:ad:15:14:
<quite a lot of data>
         7c:71:95:db:16:02:de:01:70:fe:8f:48:94:92:11:1b
$ openssl verify -CAfile certs/ca.cert.pem intermediate/certs/intermediate.cert.pem
intermediate/certs/intermediate.cert.pem: OK
$ cat intermediate/certs/intermediate.cert.pem certs/ca.cert.pem > intermediate/certs/ca-chain.cert.pem
$ chmod 444 intermediate/certs/ca-chain.cert.pem

Server certificate

In the following parts, wherever [domain] appears, it should be replaced with the hostname of our hidden service.

At first, we need to generate a certificate request (CSR) on our server:

$ openssl genrsa -out [domain].onion.key.pem 4096
Generating RSA private key, 4096 bit long modulus
.................++
..............................................................................++
e is 65537 (0x010001)
$ chmod 400 [domain].onion.key.pem
$ openssl req -config ca/intermediate/openssl.cnf \
> -key [domain].onion.key.pem -new -sha256 -out [domain].onion.csr.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:PL
State or Province Name [England]:lodzkie
Locality Name []:
Organization Name [Alice Ltd]:r4pt0r Test Systems
Organizational Unit Name []:
Common Name []:[domain].onion
Email Address []:admin@[domain].onion

Then, we will sign the request with the intermediate CA private key, thus issuing the certificate. But first of all, we need to receive the CSR from the server into the intermediate/csr/ directory.

$ openssl ca -config intermediate/openssl.cnf -extensions server_cert -days 375 \
> -notext -md sha256 -in intermediate/csr/[domain].onion.csr.pem -out intermediate/certs/[domain].onion.cert.pem
Using configuration from intermediate/openssl.cnf
Enter pass phrase for ca/intermediate/private/intermediate.key.pem:
Can't open ca/intermediate/index.txt.attr for reading, No such file or directory
139810167087040:error:02001002:system library:fopen:No such file or directory:crypto/bio/bss_file.c:74:fopen('ca/intermediate/index.txt.attr','r')
139810167087040:error:2006D080:BIO routines:BIO_new_file:no such file:crypto/bio/bss_file.c:81:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 4096 (0x1000)
        Validity
            Not Before: Feb 20 17:52:13 2018 GMT
            Not After : Mar  2 17:52:13 2019 GMT
        Subject:
            countryName               = PL
            stateOrProvinceName       = lodzkie
            organizationName          = r4pt0r Test Systems
            commonName                = [domain].onion
            emailAddress              = admin@[domain].onion
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            Netscape Cert Type:
                SSL Server
            Netscape Comment:
                OpenSSL Generated Server Certificate
            X509v3 Subject Key Identifier:
                DD:6E:E8:78:91:B9:F7:F4:0A:06:3F:D2:38:6D:11:4E:3C:D3:BC:E0
            X509v3 Authority Key Identifier:
                keyid:3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70
                DirName:/C=PL/ST=lodzkie/O=r4pt0r Test Systems/CN=r4pt0r Root CA/emailAddress=admin@example.com
                serial:10:00

            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
Certificate is to be certified until Mar  2 17:52:13 2019 GMT (375 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
$ openssl x509 -noout -text -in intermediate/certs/[domain].onion.cert.pem
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 4096 (0x1000)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = r4pt0r Intermediate CA, emailAddress = admin@example.com
        Validity
            Not Before: Feb 20 17:52:13 2018 GMT
            Not After : Mar  2 17:52:13 2019 GMT
        Subject: C = PL, ST = lodzkie, O = r4pt0r Test Systems, CN = [domain].onion, emailAddress = admin@[domain].onion
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:c5:d3:e2:a0:97:b8:4d:67:22:94:c9:be:17:e3:
<quite a lof of data>
                    49:76:cf
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            Netscape Cert Type:
                SSL Server
            Netscape Comment:
                OpenSSL Generated Server Certificate
            X509v3 Subject Key Identifier:
                DD:6E:E8:78:91:B9:F7:F4:0A:06:3F:D2:38:6D:11:4E:3C:D3:BC:E0
            X509v3 Authority Key Identifier:
                keyid:3D:AC:8E:21:79:5A:AD:7B:7C:92:92:65:B7:19:D0:E8:00:0E:50:70
                DirName:/C=PL/ST=lodzkie/O=r4pt0r Test Systems/CN=r4pt0r Root CA/emailAddress=admin@example.com
                serial:10:00

            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
    Signature Algorithm: sha256WithRSAEncryption
         b0:92:d9:d5:3b:31:38:f6:b8:51:1f:41:e9:f7:d8:e6:33:67:
<quite a lot of data>
         ee:c4:eb:19:86:69:00:26:8d:04:7b:97:0b:8f:f5:76
$ openssl verify -CAfile intermediate/certs/ca-chain.cert.pem intermediate/certs/[domain].onion.cert.pem
intermediate/certs/[domain].onion.cert.pem: OK

httpd configuration

Finally, we can use the generated files to set up HTTPS encryption on the webserver. For this, I am using httpd, as it is the most common webserver in use. We need the following files:

  1. [domain].onion.key.pem – the private key that will be used to set up the TLS session
  2. [domain].onion.cert.pem – the certificate that will prove our identity, so the web browser will not display any warnings as long as we have the CA certificate installed
  3. ca-chain.cert.pem – the chain of certificates we created together with the intermediate CA, consisting of both CAs – root and intermediate

Below is the httpd configuration file after enabling TLS:

Listen 666

<VirtualHost *:666>
    ServerAdmin admin@re-ws.pl
    DocumentRoot "/home/r4pt0r/tor/hs/public_html"
    ServerName 192.168.253.4
    ErrorLog "[path]/tor/hs/error_log"
    CustomLog "[path]/tor/hs/access_log" common
    ScriptAlias /cgit/ "/usr/lib/cgit/cgit.cgi/"
    Alias /cgit-css "/usr/share/webapps/cgit/"
    SSLEngine on
    SSLCertificateFile "[path]/tor/hs/tls/[domain].onion.cert.pem"
    SSLCertificateKeyFile "[path]/tor/hs/tls/[domain].onion.key.pem"
    SSLCACertificateFile "[path]/tor/hs/tls/ca-chain.cert.pem"
</VirtualHost>

As can be seen above, all the necessary files have been moved to the tls directory inside our hidden service's main directory.

Afterwards, one slight change is needed in the torrc file:

HiddenServicePort 443 127.0.0.1:666

From now on, we need to use https://[domain].onion to visit our site, as it is now TLS-encrypted and uses port 443, which is the default for HTTPS. For convenience, we can set up another httpd vhost on a different port that will redirect all HTTP traffic to HTTPS, and link it to port 80, so remembering about https in the address will not be necessary. But this is only optional, so I will leave it as an exercise for the reader.

Firefox

From this point on, it is useful to have a Firefox that is not constantly reminding us about an insecure connection. To prevent this, we should install the CA certificate into Firefox. One remark here: as we are going to hack Firefox into trusting our certificate, from now on our whole browsing through that instance of Firefox relies on our CA's private key. So, it is best not to use the same instance for anything else, unless you are really sure that the private keys of both the root and the intermediate CA are perfectly secure.

To install the certificate, follow the screenshots below:

Firefox privacy preferences
On the preferences page, go to Security and scroll all the way down to the View Certificates button

Firefox CM - Authorities

Firefox - Downloading Certificate
Confirm that this certificate will be able to identify websites
Firefox - page secure
Finally, we are secure and no exclamation mark appears!

Setting up new v3 Hidden Service with ultimate security: Part 1: Hidden Service setup

This post is a part of Tor v3 tutorial. Other parts are:

  1. Hidden Service setup
  2. PKI and TLS
  3. Client Authentication
  4. Installing client certificates to Firefox for Android

As a student, I was lucky to have unlimited private Git repositories on Github, since they introduced them to their first paid plan. Unfortunately, I don't have access to an educational e-mail anymore, so I won't be able to renew the service. This leads to the need to migrate that feature somewhere else. Some time ago, I installed cgit and gitolite on my single board computer (SBC). But, because of Github, there was no need to use them. Now this seems like a good replacement for Github's Developer plan.

A few weeks ago, there was an interesting event – the Tor Project introduced a new version of their Hidden Services – v3, which changes the length of a hidden service address in the .onion domain and disables a “feature” that enabled some nodes in the network to index all existing service addresses. This seems like a good moment to give it a try and check how fast (or rather how slow) a solution providing git through Tor on a few-year-old SBC will be. By the way, I will show how to configure things with maximum security in mind.

Disclaimer: I am not a person with deep knowledge of the inner workings of the Tor network, so I strongly encourage you to read a thing or two about how to use it safely. This article might contain errors that could reveal your identity, especially when used together with hidden services you do not own.

Prerequisites

Let's start with a summary of what we will need to make Tor v3 work:

  • tor in version 0.3.2.9 or higher
  • alternatively Tor Browser 7.5 or higher
  • for Android: Orbot and Orfox (at the moment of writing, there is no support in the current version of Orbot, so a custom compilation is required – I am using Termux to provide the tor binary)
  • httpd or any other HTTP server able to provide the service with only one vhost, on a separate TCP port

Because of the way I am planning to configure the hidden service in the future, it might be a good idea to set up a separate Tor Browser at this point, dedicated to this service, if this is going to be a production configuration. If this is just an experiment, this advice can safely be ignored. However, it is good to know how to undo any modifications to the browser that will be made in the next parts.

httpd

What we need to do is listen on localhost, on some random TCP port. Then we will set up httpd to provide only one virtual host on this custom port. It would be perfect to disable any other vhosts, as our hidden service will also work as a non-hidden service for local users; so if some other service is buggy and allows connections to other local services (see e.g. DNS rebinding), at the very least the address of our hidden service could be compromised.

I have the following configuration:

Listen 666

<VirtualHost *:666>
    ServerAdmin [email]@[domain]
    DocumentRoot "[path]/public_html"
    ServerName [domain].onion
    ErrorLog "[path]/error_log"
    CustomLog "[path]/access_log" common
</VirtualHost>

<Directory "[path]/public_html">
    DirectoryIndex index.html index.php index.txt
    AllowOverride All
    Options FollowSymlinks
    Require all granted
</Directory>

Furthermore, httpd must be able to traverse to the public_html directory, so every directory from public_html up to the root must have the execute privilege for the http user, and the directory itself, as well as its contents, must be readable by (or better, owned by) http.
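For example, assuming public_html lives in /home/user/tor/hs, something along these lines would do:

chmod o+x /home /home/user /home/user/tor /home/user/tor/hs
chown -R http:http /home/user/tor/hs/public_html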

After that, and after starting httpd, it should be possible to visit http://localhost:666 in a web browser and see the content of the public_html directory. If this works, we can move on to the tor configuration.

tor

SocksPort auto

HiddenServiceDir /etc/tor/hsv3
HiddenServiceVersion 3
HiddenServicePort 80 127.0.0.1:666

SafeLogging 0
Log notice stdout
Log notice file /etc/tor/hsv3/hs.log
Log info file /etc/tor/hsv3/hsinfo.log

Now, on the first startup of tor, it should create the keys for our new hidden service. We can look into /etc/tor/hsv3/hostname to see the .onion address. It is a good idea to make the key files and the hostname file readable only by the user running the tor service. In the case of a service started by systemd, this will probably be tor by default.

After starting the tor service (systemctl start tor in the case of systemd), we can check if everything works properly by visiting our hidden service with a tor-enabled browser (using tor 0.3.2.9 or higher). That's it.

Firefox for Android

At the time of writing this article, there is still no upgrade for the Orbot app, which provides a GUI interface for tor. Because of that, it might be necessary to use ordinary Firefox with tor as a proxy, which is generally a bad idea for connecting to any hidden services, because of privacy and anonymity. Fortunately, we can live with revealing our identity to ourselves 🙂 so we can do it just this one time.

What we need to change are the following configuration options, available under the about:config page:

  • network.proxy.socks to localhost
  • network.proxy.socks_port to 9050
  • network.proxy.socks_remote_dns to true
  • network.proxy.socks_version to 5, if set to anything else (5 should be the default)
  • network.proxy.type to 1 (0 means no proxy, 5 is system proxy)
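
The same settings can also be written into a user.js file in the Firefox profile directory (its location varies per device), so they are reapplied on every start:

// user.js – profile-level equivalent of the list above
user_pref("network.proxy.socks", "localhost");
user_pref("network.proxy.socks_port", 9050);
user_pref("network.proxy.socks_remote_dns", true);
user_pref("network.proxy.socks_version", 5);
user_pref("network.proxy.type", 1); // 1 = manual proxy configuration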

Conclusion

Now we are ready to use our hidden service, from both desktop and mobile. Still, we use only the HTTP protocol, which is not a big problem, as tor already provides encryption. Nevertheless, our next goal will be to configure HTTPS. And then we will configure client authentication for the ultimate security of our hidden service.

LKV373A: porting objdump

This article is part of a series about reverse engineering the LKV373A HDMI extender. Other parts are available at:

After part four, we already have an ELF file storing all the data we found in the firmware image, described in a way that should make our analysis easier. Moreover, we have the ability to define new symbols inside our ELF file. The next step is to add support for our custom architecture to objdump, and this is what I want to show in this tutorial.

Finding the best architecture to copy

If we want to set up a new architecture in the objdump code, we need to learn the interfaces that have to be implemented. It would be easier if we could use some existing code as a reference. After some digging into the binutils code, I learned that the bfd and opcodes libraries are of special interest: they contain the code dedicated to particular architectures. The first one seems to be related to object file handling (which in our case is ELF), so we should not tinker with it too much. The second one is related to disassembling binary programs, so it is what we are looking for.

I did a quick examination of the source code related to popular architectures, and it does not seem easy to adjust to our needs. The architecture I found most suitable for modification is MicroBlaze. Its source seems quite well written, clean and short. Also, from my research on the architecture name for the LKV373A (part 2, failed by the way), I remember it is quite similar to the one present in the LKV373A, so it is an even better choice.

Compiling objdump for target architecture

At first, it is useful to learn how to compile objdump so that it is able to disassemble programs written for our target. MicroBlaze is not really a mainstream architecture, so there aren’t many programs compiled for it available online after typing 'microblaze program elf' into the usual search engine. However, I was able to find two of them, which let me verify that the compilation worked. If you can’t find any, I uploaded these to MEGA, so they can serve as test cases. The first one is a minimal valid file, the other one is quite huge.

Compilation is very easy. The only thing that needs to be done besides the usual ./configure && make && make install is adding the target option to the configure script. So, the invocation looks as follows:

./configure --target=microblaze-elf

Of course, the install step can safely be skipped, as can the compilation of tools other than objdump. objdump itself seems to be buildable with make binutils/objdump. However, it can’t be built successfully using that shortcut alone, so the whole binutils package must be configured in such a way that everything non-buildable is excluded from the build.
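
If the tree has already been configured this way, something along these lines might narrow the build down (an untested sketch; the top-level Makefile provides per-module all-* targets):

make all-libiberty all-bfd all-opcodes   # the libraries objdump links against
make all-binutils                        # then the binutils/ tools, including objdump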

Setting up own architecture

The next step is to add support for our brand new, custom architecture to binutils’ configuration files and copy the microblaze sources, so they will simulate our architecture until we write our own implementation. Then it should be possible to test objdump again against our sample MicroBlaze programs, and disassembly should still work.

Even without any modification to binutils’ sources or configs, it should be possible to configure it for any random architecture. The only constraint is the format of the target string: ARCH-OS-FORMAT, where FORMAT is most likely elf. So, if we pass lkv373a-unknown-elf as the target, it will work. The -unknown part is usually skipped, but a target string without it will not work here. If we need it to work, config.sub must be modified. config.sub is used to convert any string passed to configure into canonical form, in our case lkv373a-unknown-elf. If it detects that the string is already in canonical form, it does nothing.

The final configure command will be slightly more complex, as we have to disable some parts that are not of interest to us and would require additional effort to work:

./configure --target=lkv373a-unknown-elf --disable-gas --disable-ld --disable-gdb

Although passing something random as the target option works at the configure stage, it will obviously fail at the make stage. What make does first is configure all the sublibraries. Of interest to us are bfd and opcodes. And the first one fails. So this is the first problem we need to get rid of.

bfd/config.bfd

The purpose of this file is to set some environment variables depending on the target architecture. If it does not know the architecture, it returns an error to the caller, which is probably bfd’s configure script, called by make. According to the documentation in the file header, it sets the following variables:

  1. targ_defvec – the default vector. This links the target to a list of objects that will provide support for ELF files built for the specific architecture (stored in bfd/configure.ac)
  2. targ_selvecs – a list of other selected vectors. Useful e.g. when we need support for both 32- and 64-bit ELFs. Not needed here.
  3. targ64_selvecs – 64-bit related stuff. Used when the target can be both 32- and 64-bit; meaningless in our case.
  4. targ_archs – the name of the symbol storing the bfd_arch_info_type structure. It provides a description of the architecture to support.
  5. targ_cflags – looks like a hack to pass extra CFLAGS to the compiler. We don’t care.
  6. targ_underscore – not sure what it is; it should have no impact on our goals (possible values are yes or no)

To sum up, what we need to do in this step is define the default vector (we will add it to configure.ac later) and set the name of the architecture description structure. The structure itself will be defined later. Finally, I ended up with the following patch:

@@ -173,6 +173,7 @@ hppa*)     targ_archs=bfd_hppa_arch ;;
 i[3-7]86)   targ_archs=bfd_i386_arch ;;
 i370)     targ_archs=bfd_i370_arch ;;
 ia16)     targ_archs=bfd_i386_arch ;;
+lkv373a)  targ_archs=bfd_lkv373a_arch ;;
 lm32)           targ_archs=bfd_lm32_arch ;;
 m6811*|m68hc11*) targ_archs="bfd_m68hc11_arch bfd_m68hc12_arch bfd_m9s12x_arch bfd_m9s12xg_arch" ;;
 m6812*|m68hc12*) targ_archs="bfd_m68hc12_arch bfd_m68hc11_arch bfd_m9s12x_arch bfd_m9s12xg_arch" ;;
@@ -924,6 +925,10 @@ case "${targ}" in
     targ_defvec=iq2000_elf32_vec
     ;;

+  lkv373a*-*)
+    targ_defvec=lkv373a_elf32_vec
+    ;;
+
   lm32-*-elf | lm32-*-rtems*)
     targ_defvec=lm32_elf32_vec
     targ_selvecs=lm32_elf32_fdpic_vec

bfd/configure.ac

Now we need to define the vector we just declared to use for the lkv373a architecture.

    k1om_elf64_fbsd_vec)         tb="$tb elf64-x86-64.lo elfxx-x86.lo elf-ifunc.lo elf-nacl.lo elf64.lo $elf"; target_size=64 ;;
    lkv373a_elf32_vec)           tb="$tb elf32-lkv373a.lo elf32.lo $elf" ;;
    l1om_elf64_vec)              tb="$tb elf64-x86-64.lo elfxx-x86.lo elf-ifunc.lo elf-nacl.lo elf64.lo $elf"; target_size=64 ;;

Unfortunately, as we modified the .ac script, we now need to rebuild configure. From my experience, any tinkering with autohell, after solving one problem, creates five more. We need to get into the bfd directory and reconfigure the project:

cd bfd
autoreconf

Now, if it worked for you, you should definitely go play the lottery πŸ™‚ . For me it said that I need exactly the same version of autoconf as used by the binutils developers. Because autoconf is so great, what I will show now is probably completely useless for anyone else, but the hacks I needed to make were, at first, to add:

m4_define([_GCC_AUTOCONF_VERSION], [2.69])

to the beginning of the configure.ac file. Then, bfd/doc/Makefile.am contains the removed cygnus option at the beginning, in AUTOMAKE_OPTIONS, so we need to delete it. After that, running automake --add-missing, as autoreconf suggests, and then autoreconf again should solve the problem. But, as I said, this will probably not work for you. I can only wish you good luck.

(if you were following the steps, you might have noticed that autoconf complained about not being version 2.64, while we overrode the version from 2.69 to 2.69 and it worked πŸ™‚ , don’t ask me why, please!)

After this step, compilation should start (and will obviously fail miserably on bfd, as it misses a few symbols). Now it’s time to make bfd compilable.

bfd/elf32-lkv373a.c

This file is meant to provide support for custom features of the ELF file. As we don’t have any, we can safely do nothing here. A good template for such a file is elf32-m88k.c, as it does exactly this.

One thing that seems to be important here is the EM value of the described architecture. EM is an enum used in the ELF file to define the target architecture, so it might need to be adjusted in our new elf32-lkv373a.c file. By the way, the definition of this value has to be added to include/elf/common.h:

/* LKV373A architecture */
#define EM_LKV373A              0x373a

It might also be a good idea to add it to elfcpp/elfcpp.h. To make the file compile, it is necessary to add the following to bfd/bfd-in2.h as a value of the bfd_architecture enum:

bfd_arch_lkv373a,    /* LKV373A */
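
With EM_LKV373A and bfd_arch_lkv373a in place, a minimal elf32-lkv373a.c modeled on elf32-m88k.c could look roughly like this (just a sketch: big-endianness is only my assumption here, and the vector name must match the one we declared in configure.ac):

#include "sysdep.h"
#include "bfd.h"
#include "elf-bfd.h"

/* No custom ELF features - plain defaults, exactly like elf32-m88k.c.  */
#define TARGET_BIG_SYM     lkv373a_elf32_vec
#define TARGET_BIG_NAME    "elf32-lkv373a"
#define ELF_ARCH           bfd_arch_lkv373a
#define ELF_MACHINE_CODE   EM_LKV373A
#define ELF_MAXPAGESIZE    0x1000

#include "elf32-target.h"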

bfd/archures.c

As we declared bfd_lkv373a_arch as the symbol with the CPU description structure, we now need to add this declaration to archures.c, as this is the file where it will be used. We have to add:

extern const bfd_arch_info_type bfd_l1om_arch;
extern const bfd_arch_info_type bfd_lkv373a_arch;
extern const bfd_arch_info_type bfd_lm32_arch;

bfd/targets.c

A similar situation occurs in the targets.c file. Here we have to provide the declaration of our vector as a bfd_target. This is another structure, which seems to be generated automatically, so we should not care about it.

extern const bfd_target l1om_elf64_fbsd_vec;
extern const bfd_target lkv373a_elf32_vec;
extern const bfd_target lm32_elf32_vec;

bfd/cpu-lkv373a.c

The last file we need in bfd provides the bfd_arch_info_type structure and… that’s it! It can easily be borrowed from cpu-microblaze.c with only slight modifications. One thing that needs explanation here is section_align_power. As far as I understand it, it is the power of two to which the beginning of a section in memory must be aligned. It should be safe to put 0 here, as we are not going to load our ELF into memory.
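
Based on cpu-microblaze.c, the whole file could be sketched as below (the exact field list of bfd_arch_info_type differs slightly between binutils versions, so compare with your tree):

#include "sysdep.h"
#include "bfd.h"
#include "libbfd.h"

const bfd_arch_info_type bfd_lkv373a_arch =
{
  32,                      /* Bits in a word.  */
  32,                      /* Bits in an address.  */
  8,                       /* Bits in a byte.  */
  bfd_arch_lkv373a,        /* Value of the bfd_architecture enum.  */
  0,                       /* Machine number - only one for now.  */
  "lkv373a",               /* Architecture name.  */
  "LKV373A",               /* Printable name.  */
  0,                       /* Section align power - safe, we will not load this ELF.  */
  TRUE,                    /* This is the default machine.  */
  bfd_default_compatible,  /* Architecture comparison function.  */
  bfd_default_scan,        /* String to architecture conversion.  */
  bfd_arch_default_fill,   /* Default fill.  */
  NULL                     /* Next in list.  */
};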

This should close the bfd part of the initialization. As you can see, there was no real development to be done here. Let’s now move on to the opcodes library.

opcodes/configure.ac

At first, we need to define the objects to build for the LKV373A architecture in the opcodes library. This is quite similar to what we had to do in bfd’s configure.ac.

        bfd_iq2000_arch)        ta="$ta iq2000-asm.lo iq2000-desc.lo iq2000-dis.lo iq2000-ibld.lo iq2000-opc.lo" using_cgen=yes ;;
        bfd_lkv373a_arch)       ta="$ta lkv373a-dis.lo" ;;
        bfd_lm32_arch)          ta="$ta lm32-asm.lo lm32-desc.lo lm32-dis.lo lm32-ibld.lo lm32-opc.lo lm32-opinst.lo" using_cgen=yes ;;

Hopefully, the -dis file will be the only one we need to implement. I’ve made a copy of the microblaze configuration. In the same way, we will copy the whole source file and any related headers in the next step.

Now, similarly to bfd’s configure.ac, we have to reconfigure the library. And again, nobody knows what errors we will encounter.

opcodes/disassemble.c

The only thing that has to be done here is to set the pointer to the disassemble function. For this, the following snippets should be added: the define goes near the top of the file, with the other ARCH_* defines, and the case goes into the big architecture switch:

#define ARCH_lkv373a

#ifdef ARCH_lkv373a
    case bfd_arch_lkv373a:
      disassemble = print_insn_lkv373a;
      break;
#endif

And to disassemble.h:

extern int print_insn_lkv373a           (bfd_vma, disassemble_info *);

opcodes/lkv373a-dis.c

This is where the real stuff will happen. As our goal for now is not to write the actual implementation of the LKV373A architecture, but rather to set everything up so that objdump builds, we can copy the source file from microblaze-dis.c. It is also required to copy the MicroBlaze-related headers used by this file, that is:

  • opcodes/microblaze-dis.h
  • opcodes/microblaze-opc.h
  • opcodes/microblaze-opcm.h

And change the include directives in them to point to the lkv373a files rather than the microblaze ones.

Now, optionally, we could change the names of any symbols referring to the name microblaze, but this should not be required, as the original microblaze files should not be included in the build. The only change that needs to be done is renaming print_insn_microblaze to print_insn_lkv373a, as this is what we added to disassemble.c.
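
Something like this one-liner should do (run from the binutils tree root, assuming the copied files were named lkv373a-dis.c and lkv373a-dis.h):

sed -i 's/print_insn_microblaze/print_insn_lkv373a/g' opcodes/lkv373a-dis.c opcodes/lkv373a-dis.h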

You should now be able to compile a working objdump with LKV373A support (with the wrong implementation, for now, of course). We can verify that everything works on a slightly modified ELF file for the MicroBlaze architecture (the EM field must point to LKV373A, i.e. the value must be 0x373a). Well done!
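
The e_machine field is a 16-bit value at offset 0x12 of the ELF header, so the patch can be applied in place, e.g. with dd (byte order follows the file’s EI_DATA flag; big-endian shown here, and test.elf is a hypothetical sample):

printf '\x37\x3a' | dd of=test.elf bs=1 seek=18 count=2 conv=notrunc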

NOTE: all the steps done till now are available under the tutorial-setup tag in the repository on GitHub.

Functions to implement

Now the real fun finally starts. The bindings between the opcodes library and objdump itself require at least print_insn_lkv373a to be implemented.

What should happen inside this function is quite simple and can be described in the following steps (a minimal sketch implementing them comes after the function documentation below):

  1. Get bfd_vma and struct disassemble_info (called info below) as parameters
  2. Read the raw data containing instructions using info->read_memory_func
  3. Call info->memory_error_func in case of any errors
  4. Use info->fprintf_func to print the disassembled instruction into info->stream
  5. Optionally use info->symbol_at_address_func to determine if there is a symbol declared at an address decoded from the instruction
  6. If a symbol exists, call info->print_address_func
  7. Return the number of bytes consumed

Below is some documentation of the functions to be called, which I wrote to make the implementation easier (mostly translated from the inline comments in binutils):

  /**
   * \brief Function used to get bytes to disassemble
   *
   * \param memaddr Address of the current instruction
   * \param myaddr Buffer, where the bytes will be stored
   * \param length Number of bytes to read
   * \param dinfo Pointer to info structure
   *
   * \return errno value or 0 for success
   */
  int (*read_memory_func)
    (bfd_vma memaddr, bfd_byte *myaddr, unsigned int length,
     struct disassemble_info *dinfo);
  /**
   * \brief Call if unrecoverable error occurred
   *
   * \param status errno from read_memory_func
   * \param memaddr Address of current instruction
   * \param dinfo Pointer to info structure
   */
  void (*memory_error_func)
    (int status, bfd_vma memaddr, struct disassemble_info *dinfo);
  /**
   * \brief Pointer to fprintf
   *
   * \param stream Pass info->stream here
   * \param format Format string
   * \param ... vargs
   *
   * \return Number of characters printed
   */
  typedef int (*fprintf_ftype) (void *, const char*, ...) ATTRIBUTE_FPTR_PRINTF_2;
  /**
   * \brief Determines if there is a symbol at the given ADDR
   *
   * \param addr Address to check
   * \param dinfo Pointer to info structure
   *
   * \retval 1 If there is a symbol at ADDR
   * \retval 0 If there is no symbol at ADDR
   */
  int (* symbol_at_address_func)
    (bfd_vma addr, struct disassemble_info *dinfo);
  /**
   * \brief Print symbol name at ADDR
   *
   * \param addr Address at which symbol exists
   * \param dinfo Pointer to info structure
   */
  void (*print_address_func)
    (bfd_vma addr, struct disassemble_info *dinfo);
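
Putting the steps together, a minimal print_insn_lkv373a might be sketched as follows (my sketch assumes fixed-width 32-bit, big-endian instructions and does no real decoding yet – the actual logic would replace the .word fallback):

#include "sysdep.h"
#include "dis-asm.h"

int
print_insn_lkv373a (bfd_vma memaddr, struct disassemble_info *info)
{
  bfd_byte buffer[4];

  /* Step 2: read the raw bytes of the instruction.  */
  int status = info->read_memory_func (memaddr, buffer, 4, info);
  if (status != 0)
    {
      /* Step 3: report unrecoverable read errors.  */
      info->memory_error_func (status, memaddr, info);
      return -1;
    }

  unsigned long insn = bfd_getb32 (buffer);

  /* Step 4: real decoding goes here; for now just dump the word.  */
  info->fprintf_func (info->stream, ".word\t0x%08lx", insn);

  /* Step 7: number of bytes consumed.  */
  return 4;
}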

For an easier start of development, this commit can be used as a template. You can find the effects of an implementation following this description on the lkv373a branch of my binutils fork on GitHub. After this step, you should have a working objdump, able to disassemble the architecture of your choice.

Alternative way

According to the binutils documentation, porting to new architectures should be done using a different approach. Instead of copying sources from other architectures, developers should write CPU description files (the cpu/ directory) and then use CGEN to generate all the necessary files. However, I found these files way too complicated compared to the goal I wanted to achieve, therefore I took the shortcut. In reality, however, that might be the better way, as the final result should be support for the new architecture not only in objdump, but also in e.g. GAS (the GNU assembler). If you want to go that way, another useful resource might be the description of the CPU description language.

Plans for the future

As I am now able to speed up the reverse engineering of both the instruction set and the LKV373A firmware, I am planning to create a public repository of my progress and to guess the operations done by some more opcodes, as so far I know only a few of them. So, I will probably push some more commits to the binutils repo as well. I hope this will let me gain some more knowledge about the LKV373A and allow me, or someone else, to reverse engineer the second part of the firmware, which seems to be way more interesting than the one I have been reverse engineering till now.