
Fake flash

Being a complete fanny, I managed to drop my phone and break the display assembly again.

photo5942927736622393988

I was in the pub the next day with a friend recounting my tale of woe, and as he was showing off his shiny new phone to me, it suddenly occurred to me that we used to have the same phone. I ended up buying his old phone off him for parts, and repaired my phone there and then.

photo5945285849761556663
photo5945285849761556664
photo5945285849761556665

Annoyingly, in doing so I managed to snap my SD card!
dsgdfxh

I bought a new 64GB SD card on ebay and when it arrived I decided to do a check of its integrity:

$ dd if=/dev/urandom bs=1M count=64000|pv|sudo tee /dev/mmcblk0|sha1sum -

Already, things seemed suspicious: it claimed to be a class 10 card (a minimum of 10MB/s sequential write), but it initially wrote (at least for the first few GB when I was paying attention) at about 2-2.5MB/s.
When I came back later, it was ~45GB in and had sped up to about 15MB/s.

I had planned to read it back and check that the data were the same:

$ sudo pv /dev/mmcblk0 | sha1sum -

But it read at about 10MB/s (rather than the advertised 50MB/s) so I lost interest and tried a different approach, writing to random locations and reading back. 54MiB in was fine, but 54722MiB in returned garbage.

I threw together a quick script to check random 1MiB blocks throughout the disk:

#!/bin/bash
dd if=/dev/urandom bs=1M count=1 of=somerandom
for i in $(seq 1 1000) 
do
	thisblock=$(shuf -i 0-63999 -n 1)
	sudo dd if=somerandom of=/dev/mmcblk0 bs=1M seek=$thisblock
	sudo sync
	sudo dd if=/dev/mmcblk0 skip=$thisblock bs=1M count=1 | diff - somerandom
	if [ $? == 1 ]
	then
		echo "bad"
		echo $thisblock >> badblocks
	else
		echo "good"
		echo $thisblock >> goodblocks
	fi
done

(I don’t know if the sync is strictly necessary – I wasn’t sure whether the page cache might serve the read-back without the data ever coming off the card, which would make the check meaningless. Adding iflag=direct to the reading dd, or dropping the caches, would rule that out.)

This gave me two lists, one of the known good blocks and one of the known bad blocks. About 3/4 of the blocks were bad, so it was looking like this 64GB was a 16GB in disguise:

 $ wc -l goodblocks badblocks 
 240 goodblocks
 760 badblocks
1000 total

I threw together a piece of octave too, to make a visualisation:

goodblocks=dlmread("goodblocks");
badblocks=dlmread("badblocks");
image=zeros(320,200);
for i=goodblocks'
	x=floor(i/200)+1;
	y=(mod(i,200))+1;
	image(x,y)=1;
end
for i=badblocks'
	x=floor(i/200)+1;
	y=(mod(i,200))+1;
	image(x,y)=2;
end
colourmap=[0 0 0; 0 1 0; 1 0 0];
imshow(image+1,colourmap);
imwrite(image+1,colourmap,"image.png")

The result:
image
Each pixel represents a 1MiB block of the SD card, top left being the start, working right then down, with the bottom right being 64GB (more accurately 64kMiB, or 64000MiB). Black pixels are unknown (the script took a random sample of 1000 blocks out of 64k), green are good (read back the same) and red are bad. This looks pretty conclusively like it’s a 16GB card in disguise.

The bad section seems to be the same 512 byte pattern repeated over and over for a bit, then the repeated pattern changes sometimes for no discernible reason. I haven’t invested much energy into figuring out why yet.


Puzzlehunt 2015: Dossier


Puzzlehunt

Since 2012, CUCaTS have been running a puzzlehunt annually during Cambridge’s May Week. Usually lasting 24 hours, teams of up to three compete to solve as many puzzles as possible in the allotted time. The puzzles tend to cover a wide variety of fields, including steganography, number theory, encryption, signal processing, geometry and even linguistics. Some of them are pen-and-paper problems, some require URL hacking and the like, while some require physically going to a place to look for clues. I’ve taken part twice and helped organise it three times. And either side of the admin interface, it’s always been an amazing thing to be involved in.

The puzzle

This post is a writeup for one of the puzzles I set in 2015. This one’s called dossier and you can find it on the puzzlehunt 2015 page.

Uncharacteristically, the puzzle description isn’t that cryptic: you’re told that you need to decipher the two in-depth ciphertexts.

In other words, there are two plaintexts, let’s call them p_0 and p_1. These are both encrypted with the same key, say k for example. This means we have two ciphertexts c_0=p_0 \oplus k and c_1=p_1 \oplus k

Solution

Bitwise XORing the two ciphertexts together gives us the two plaintexts XORed together:

c_0 \oplus c_1 = ( p_0 \oplus k ) \oplus ( p_1 \oplus k ) = p_0 \oplus p_1

So if we know part of one of the plaintexts, we can work out the corresponding part of the key and, helpfully, the corresponding part of the other plaintext.
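To make that concrete, here’s a tiny Python sketch of the crib-dragging idea – it isn’t the xorsolver tool described below, and the filenames and the example crib/offset are just placeholders:

#!/usr/bin/env python3
# Minimal crib-dragging sketch. "cipher0"/"cipher1" are placeholder filenames.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

c0 = open("cipher0", "rb").read()
c1 = open("cipher1", "rb").read()
xored = xor_bytes(c0, c1)  # equals p0 XOR p1 -- the key cancels out

def try_crib(crib, offset):
    # if `crib` really appears in one plaintext at `offset`, this is what
    # the other plaintext says at the same position
    return xor_bytes(xored[offset:offset + len(crib)], crib)

print(try_crib(b" the ", 0x40))  # an arbitrary guess at an arbitrary offset

If the output looks like a fragment of English, the guess is probably right; if it looks like line noise, it isn’t.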

The body of the puzzle gives a few “crossword-style” clues, along with corresponding locations in the texts at which these words occur. Take the first clue, “0x014 Maniac Hat Time (13)” – this is an anagram of “mathematician”, which occurs starting at offset 0x14 in the plaintext.

At this point, enter a little program I wrote to help with solving this: xorsolver. This is just a convenience thing really, and partly just a little exercise in tkinter for me. It does nothing which can’t be done with a few lines of C and a text editor. It’s also a terrible piece of code, with copy-and-pasted lines all over the place.

Essentially it allows you to make guesses at parts of one plaintext and immediately see the result in the other plaintext.

To start with, it initialises with a guess of the first plaintext as a long run of null bytes. The other plaintext is correspondingly nonsensical.

Screenshot_2017-02-12_18-01-53

As soon as we enter our first piece of guessed plaintext (specifying a memory location and the text at the bottom), a piece of the other plaintext reveals itself:

Screenshot_2017-02-12_18-04-03

” am writing t”. In fact we could guess that the word “mathematician” is surrounded by spaces: guessing ” mathematician ” at 0x13 yields: “I am writing th”. Now we can make a guess at the context for this. Let’s try ” I am writing this” at position 0x12.

Screenshot_2017-02-12_18-18-25

” a mathematician fr”. Guessing a bit further ahead: ” a mathematician from ” gives ” I am writing this le”. Another bit of leapfrogging ahead, and we end up with “, I am writing this letter ” and ” a mathematician from Meche”. Guessing placenames beginning with “Meche” or a word to follow “letter” might be a bit tedious, so let’s leave this section for now and pick up another clue.

Screenshot_2017-02-12_18-25-02

The clue “0x071 Not just a guess (6)” corresponds to “theory”. Guessing this at position 0x71 gives ” week.” in the other text. A decent guess at what comes before this might be ” last”. So we guess ” last week.” at 0x6C. This gives “ring theory”. A bit more leapfrogging and the furthest we can easily get with this section is ” in the fields of string theory and knot theory. He ” and “er I bought from you last week. Unfortunately, the a”.

Picking up some more clues, and doing a bit more leapfrogging, we might end up here with our two texts:

����������������� a mathematician from Meche����������������������������������������#548 in the fields of string theory and knot theory. He %,sm$6/��������������������������������������������������������������������������������������������������������������eI v probably ))-m)j9'?8l.3v��������������������������������������������������������������������tthe first reticulated pantograph ��������������������, this day to be one of the most important inventions of the 19th century in ?? ? ?  ?  theory�����������

and:

��Is$RN>A�, I am writing this letter 
NOO/��MOB'TPE�
��N�V????er I bought from you last week. Unfortunately, the a       D�EdA
�?TELFMI�
FT[@[VONHSI
IALGI
A
TCUKDtSL
	GN>O I&? however, ??????????????�HRFRM
HXVQI
NA"OD=
OHEs
�O
OJ� �' is interfering greatly with my pALNI
YP	DR ial purification and I would be grateful if you could rectify this at once. >+|#$  ",oS*v~vxE#A\�

So it looks like the two texts are a biography of some mathematician/inventor, and a complaint letter. Maybe not the juicy enemy communications we were after but let’s stick with it anyway. Incidentally, there’s no way to know which piece of plaintext corresponds to which text. You could end up with them randomly jumbled up together and have to untangle them – I’ve cheated a bit here since I know both of the texts anyway – although I did still make one mistake along the way and have to correct it. The only way to deal with this is through the context in the surrounding text. If the texts were formed of a load of independent characters, de-interlacing them would prove futile. But generally the structure of words and sentences, and the general style of the piece helps with decoding.

Anyway, using this higher level context, we might guess that the complaint letter would begin with a greeting. With a bit of guesswork we find that “Dear Sir or Madam,” fits perfectly. We end up with a good chunk of plaintext of the mathematician’s biography. “Henri Martens was a mathematician from Meche”.

The rest of the solution proceeds in the same way, using the clues given and guessing pieces of the text. Eventually you end up with the two plaintexts:
“Henri Martens was a mathematician from Mechelen, Belgium. He pioneered numerous advances in the fields of string theory and knot theory. He was awarded the Donald Knuth prize for computing in 1842 for the invention of the impersonal computer. This amazing feat, however, was surpassed shortly after in 1843 when Lord Timpson of Skegness demonstrated the the first reticulated pantograph — widely regarded to this day to be one of the most important inventions of the 19th century in the field of applied maths.”

and:
“Dear Sir or Madam, I am writing this letter to complain about the four dimensional printer I bought from you last week. Unfortunately, the arm meant to move it in the temporal dimension is defective and is stuck travelling forward at the speed of light. You are probably aware that this renders my machine into an ordinary three dimensional printer. This is interfering greatly with my plans for peaceful racial purification and I would be grateful if you could rectify this at once. Sincerely, Abratma Gandhler”

The theme of the latter was intended to change to something a little more lighthearted, along the lines of “…my plans for the enslavement of all of lifekind on earth…” but the wrong files got uploaded in the end.

Writing the puzzle

If I recall correctly, this puzzle was written at around 5AM during the 2014 puzzlehunt. Once I had the idea, two of us separately sat down to write 512 bytes of nonsense. Afterwards, I checked to see how they aligned, in terms of difficult sections matching up. The “leapfrogging” technique can reach a frustrating stall if both plaintexts reach a difficult-to-guess bit at the same time. It’s much harder to figure out that “shortly after in” is followed by “1843 when Lord Timpson of Skegness”, but if you see “I’m writing this letter”, then “to complain about the” isn’t a particularly outlandish guess. So by adding/removing filler in one text, I was able to adjust the difficulty of the puzzle. I also decided to give plenty of clues, allowing multiple angles of attack for difficult sections.

Here I’ve coloured the two texts roughly by predictable filler text (green) and stupid high-entropy stuff (red). The clues are also marked in cyan.
plaintexts

You can see that the three sections where red occurs on both lines simultaneously have clues relatively nearby.

In the end, nobody reached this puzzle in 2014 and it was rolled over to the 2015 puzzlehunt. It proved quite popular, with several teams eventually submitting correct answers. If this seems like your kind of thing, sign up to the CUCaTS mailing list to get more information closer to the time.


Bullshit on the Internet

You see a lot of bullshit on the internet. Some of it genuine scams, some of it well-intentioned but ultimately misguided.

Well, here’s something I’m not quite sure what to make of: the Eco-Cooler.

The video demonstrates a design of “air conditioning” requiring no electricity: instead just relying on the flow of air through bottles. It’s one of those things where at first glance (at least if you watch their explanation) it looks plausible, then you think about it a bit more and go “wait, what?”

Their snazzy video explains… blow on your hand with your lips pursed and it feels cooler than if you have your mouth wide open.
ecocooler-screenshot

That, they say, is how their cooler works.

ecocooler-screenshot2

Air blows through a bottle, it gets compressed, and that cools it down. A nice idea, but unfortunately thermodynamics and fluid mechanics don’t work like that.

Let’s look at the first example – air blown through pursed lips. Why does it feel colder? The answer is simply that it’s faster. Faster fluids transfer more heat, because they keep replacing the warm air around you. On a cold day, you warm up a small “bubble” of air around you. When the wind blows, it blows that away and suddenly you’re surrounded by cold air again. The faster that wind, the less of a “bubble” of warm air you can maintain and the colder you feel. But the air you’re blowing on your hand isn’t any colder because you’ve blown it through pursed lips.

Refrigeration Basics

So what about “the bottle neck compresses and cools the air”? This seems like a massive misunderstanding of how a refrigeration cycle works. So let’s start there.

The way conventional air-conditioning works is that it takes some working fluid (1) and compresses it (2), using a pump. The pressure increases and, crucially, the volume decreases. The same amount of heat* in a smaller space is going to get hotter. If you dump this heat somewhere (3) and bring the compressed fluid down to ambient temperature, it now has less heat than it started with. This is what’s happening in the compressor outdoors when you have an air con system and why it blows out hot air. Since the now cool-ish fluid is still compressed, you can expand it again. When you do, that same amount of heat* spreads out across a larger volume, and so you end up colder than you started. This is what’s happening indoors where the cold air comes out.
(*Actually the heat changes a bit because of the work done on or by the fluid, but that’s unnecessary complication and isn’t crucial to understand the general idea. I thought I should say it anyway for the sake of intellectual honesty.)
all-numbered

So is that what’s happening here? No. The key thing to note about a refrigeration (or air-conditioning) system is that no heat is destroyed. The inside’s made colder at the expense of hot air being blown out of the compressor. So, if this eco-cooler does indeed cool air, where does the heat go?

Could we potentially make this work if we extracted some heat at the bottle neck, and dissipated that to the outside environment?

Flow Analysis

Enter everyone’s favourite often-misused equation of fluid mechanics: Bernoulli.

Bernoulli’s principle is simply a statement that along a streamline, energy is conserved. This means that anywhere along the flow, \frac{v^2}{2}+\frac{p}{\rho} is the same. As air travels through one of the bottles, it will indeed have to speed up as it enters the mouth, because of conservation of mass and therefore volume (subsonic air flow is essentially incompressible). By Bernoulli, this causes pressure to decrease and therefore, according to the ideal gas law, the temperature to change slightly.

How much does the temperature change? Let’s say air enters at 5m/s. That’s a fair breeze. Looking at a drinks bottle, I’d guess the mouth is about 1/5 the diameter of the bottle, so the air has to accelerate to 25 times its starting speed to get through. Let’s also assume an air density of 1.2kg/m³ and ignore the obvious ludicrousness of air travelling through a bottle neck at 125m/s (in reality viscosity would mean a lot more energy gets turned to heat).
Plugging the numbers in, the gauge pressure in the mouth is:
\rho \left(\frac{v_1^2}{2}-\frac{v_2^2}{2} \right) = 1.2 \left(\frac{5^2}{2}-\frac{125^2}{2} \right) = -9.4kPa

From the ideal gas equation (with R=287 J/(kg·K), the specific gas constant of air), air at 45 degrees will be around 110kPa:
P=\rho R T=1.2 \times 287\times(45+273)=109.5kPa

Taking off the gauge pressure and doing the same in reverse gives a temperature of 18 degrees. The air in the mouth is actually colder! This isn’t what we want for a refrigeration cycle at all — if anything this colder air would absorb heat from the environment and be even hotter than it started when it leaves the mouth and returns to atmospheric pressure. We’ve made a very shitty heat pump.
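If you want to check those numbers, the whole calculation is only a few lines of Python (same assumptions as above: incompressible flow, 1.2kg/m³ air and a 25× speed-up through the mouth):

#!/usr/bin/env python3
# The numbers from the text above, nothing more.
rho, R = 1.2, 287.0                    # air density (kg/m^3), specific gas constant (J/(kg K))
v1, v2 = 5.0, 125.0                    # speed outside and in the bottle mouth (m/s)
gauge = rho * (v1**2 / 2 - v2**2 / 2)  # Bernoulli: about -9.4 kPa
p_ambient = rho * R * (45 + 273)       # ideal gas at 45 degrees: about 109.5 kPa
T_mouth = (p_ambient + gauge) / (rho * R) - 273
print(gauge, p_ambient, T_mouth)       # -9360 Pa, ~109519 Pa, ~17.8 degrees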

What is this thing?

So if this isn’t actually going to do any refrigeration, what is it? And why does it cool the building by 5 degrees as they claim in the video?

My guess is that their test is unfair or unrepresentative. Perhaps they were comparing no ventilation with this, rather than comparing this with… well, just a window. Just the circulation of cooler air from outside is going to help to some extent, but no more than an open window. But they don’t seem to have any information beyond what’s in the video.

I struggle to work out what the motivation is here. Initially it seems like a solar-freakin-roadways sort of thing: well-intentioned but fatally flawed. But although there’s nothing being sold here, the video does mention two sponsors, so perhaps this is just a publicity stunt for them?

They seem to have a website which is under construction at the moment, so perhaps we’ll find out soon…


Nixie Tube Clock

Today, the story of Ahmed Mohamed has been doing the rounds. I suppose you could even say it’s gone “viral”, as I believe is the modern parlance. A tl;dr for those who missed it: 14-year-old kid and electronics enthusiast takes a homemade digital clock into school to show to his teachers. Far from being impressed at his ingenuity and enthusiasm for technology, they decide it looks too much like a bomb and so the police are called. All of which begs the question, would this have happened if he was white? A certain clickbait news website picked up on this idea with a headline “7 Kids Not Named Mohamed Who Brought Homemade Clocks to School And Didn’t Get Arrested”, upon reading which I realised — hey, that’s me! So I thought this seemed as good a time as any to dig out the old photos of the nixie tube clock I made back in 2010 — something I’ve been meaning to do for a while.

And yes, I took this into school to show my electronics teacher. And no, I wasn’t arrested!

Image0499
This is a nixie tube. It’s a neon gas filled tube containing an anode grid and several symbol (in this case number) shaped cathodes. When a negative voltage is applied to one of the cathodes, the neon in its proximity is excited and glows. It’s a very pretty type of display since the symbols are bent out of metal rather than formed of straight line segments, so the numbers have a very natural and appealing shape. The tubes I used were soviet surplus ИН-16 tubes.

Image0504
For the physical design of the clock, I wanted to go for a slightly steampunk look, using copper pipes. I soldered this up out of 15mm plumbing pipe and joints, and the idea was that I’d thread all the wires through the pipes so the tubes were sticking out the top. Ultimately I planned to make a wooden box — stained mahogany — for the electronics with this sticking out of the top.
Trying to fit all of those wires through 15mm pipe was a real pain!

Image0518
To drive them, I commoned up all of the anodes and put a high voltage NPN transistor on each cathode.
In hindsight, I could have grouped the cathodes and had a high side switch to each anode, then scanned through them, much like one would do with a multi-digit seven segment display (check out the wiring of these 4-digit displays for example) but I didn’t think of that at the time and probably wouldn’t have been able to manage the gate drive anyway. (Give me a break, I was still a high school kid at the time!)

Image0523
Here I am testing the drive circuit. I know that isn’t a real time, I just picked random digits! It was powered here (and indeed onwards) from a crude mains isolation setup: I stepped the mains down with a transformer to low voltage DC, and took two taps off that: one went through an identical transformer in reverse to step it back up to isolated 230VAC (then rectified and smoothed) and the other went to a rectifier/regulator circuit to provide 5VDC for the digital electronics.

Image0565
At this point the photo album was inexplicably interrupted by a photo of one of the mice I had at the time. She gave birth to 13 pups shortly after.

Image0554
Here’s the circuit with some of the digital electronics. A PIC microcontroller was the brains of it. (The colourful twisted wires are for in-circuit programming.) It drove the tens-of-hours and tens-of-minutes digits directly, whereas the units digits, to save on microcontroller outputs, were connected up to 4017B decade counters. These were clocked and reset by the PIC, requiring only two outputs each.

Image0570
It took pride of place on our mantelpiece for 24 hours, to check for mistakes in my code and that it kept time reasonably well. The speaker was added to make it “tick”. You can also see the crude isolation transformer setup on the left.

My plan next was to make a PCB for the circuit and make a box for it to go in. But after a brief trip into school to show it off to my teacher, I took it into London Hackspace to work on it — and perhaps solicit some advice on the woodwork side of it — but shortly after that it disappeared! Perhaps it got deleted in a fit of rage tidying, who knows…

Looking back, it’s kind of embarrassing to see how past me did things. Notably the mass-of-wires breadboarding and the bombsite of a workbench while working with mains voltage. There are plenty of things I’d do differently too – probably an SMPS to drive the nixies to save on transformers, commoning up the cathodes and scanning the display, using an RTC (this just used the PIC’s built in clock so kept time really badly!) and in hindsight that copper pipe should /really/ have been grounded, with the single insulated cable running through it at 300VDC. But hey, you live and learn. Five years on, I’m older and wiser. Nothing now remains of this project but the photos and an unused bottle of mahogany wood stain. The hardware all went missing, the code is long gone and the schematics only live on as vague recollections in my head. And it will probably remain so — I have little motivation to make a clock now (where would I put it? why would I need one when practically everything shows the time these days?) and plenty of others have done nixie clocks such that they’re almost commonplace nowadays, so the novelty of it has somewhat worn off…


pwgen-h

xkcd password strength comic

Inspired by xkcd #936 (reproduced above, credit to xkcd), I threw together a quick python script to generate correct horse battery staple style passwords.

pwgen-h:

#!/usr/bin/python3
import secrets,sys
dictfile=open('wordlist')
dictlist=[]
for a_line in dictfile:
    dictlist.append(a_line)
for x in range(int(sys.argv[1])):
    print(secrets.choice(dictlist).strip(),end=" ")
print("")

(edit: I’ve updated this as of Jan 2022: when I first wrote this, I used the python “random” module, which isn’t secure. Since then, the “secrets” module has been added to python from 3.6, which generates secure randomness, so I’ve updated it to use that)

The name comes from the unix pwgen utility, but -h because it generates “human readable” passwords.

It needs to be fed a file “wordlist” containing one word per line for it to choose from. You can use mine (roughly the 5000 most common english words) or your own. It’s then invoked as pwgen-h num_of_words

Example invocations and outputs:

$ ./pwgen-h 6
starting give progress limit accommodate code
$ ./pwgen-h 4
gravity Latin convenience exclude

The wordlist I used is about 5000 words, so a resulting 6-word password has an entropy of 6 \times log_2{5000}=73.7 bits. That means that there are around 10^{22} possibilities to brute force. At a million guesses per second, it would take in the order of a billion years to check every possibility.
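(If you want to play with those numbers, the sums are only a few lines of Python:)

#!/usr/bin/env python3
# Back-of-envelope version of the claim above.
from math import log2
bits = 6 * log2(5000)                      # ~73.7 bits of entropy
guesses = 2 ** bits                        # ~1.5e22 possibilities
years = guesses / 1e6 / (3600 * 24 * 365)  # at a million guesses per second
print(bits, guesses, years)                # ~73.7, ~1.5e22, ~5e8 years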

This has the caveat though that you have to take the first password it offers you* for the working above to be valid. If you keep trying it until you see a password which you like, you’re reducing your entropy in a difficult to quantify way.

*Or rather, don’t decide based on the output whether to use it. Of course, you can play around with it as much as you like, but you should decide which password you’re going to use before it’s generated for maximum security.


Measuring Capacity of Constellations

Having just the other day finished off my fourth year project (and in doing so, completed my degree! hooray!), I thought I’d share a little tidbit from it which perhaps might be useful to someone else.

Background

Part of the project involved QAM constellation shaping. That is, moving points around in a constellation — in this case, to increase the capacity.
A typical QAM constellation would look something like this, shamelessly taken from wikipedia (created by user Splash):
qam16

Whereas here is an example of a shaped constellation which I investigated (coloured by decision region for hard decision decoding):
64qam-nonsquare

Capacity

Comparing these constellations required that I be able to measure their capacities. Here I’m considering a very simple system:
system
It’s simply a two-dimensional AWGN channel.

Grid Evaluation

This is the first method I tried. It didn’t work very well, and I’ll explain why.

The capacity of the channel is defined by the mutual information between the input and the output:
I(Y;X)=H(Y)-H(Y|X)

Here, H(Y|X)=H(X+N|X)=H(N), which is simply the entropy of the gaussian distribution and has an analytic form. In this case, where the gaussian is bivariate and isotropic (that is, the \Sigma matrix is diagonal), this becomes:
1+log(2\pi\sigma^2)
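(For the record, that comes from the differential entropy of a bivariate Gaussian with covariance \sigma^2 I, working in nats:)
H(N)=\frac{1}{2}log\left((2\pi e)^2\sigma^4\right)=log(2\pi e\sigma^2)=1+log(2\pi\sigma^2)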

H(Y) on the other hand is a nastier affair – it’s the entropy of the noisy signal, or that of a sum of gaussians, -\iint{f_Y(y)log(f_Y(y))dy}. This has no closed form so instead the approach I took was to sample f_Y(y) over a grid, and then sum across it for -\sum{f_Y(y)log(f_Y(y))}
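Here’s a rough NumPy sketch of that grid approach, just to show the shape of it – it isn’t my original MATLAB code (that’s linked at the bottom of the post), and the toy constellation, noise level and grid extent are arbitrary:

#!/usr/bin/env python3
# Grid evaluation of H(Y) for a toy 4-QAM constellation (arbitrary numbers).
import numpy as np

points = np.array([a + 1j * b for a in (-1, 1) for b in (-1, 1)])  # toy 4-QAM
sigma2 = 0.1                                  # noise variance per real dimension
grid = np.linspace(-4, 4, 400)
dx = grid[1] - grid[0]
Y = grid[:, None] + 1j * grid[None, :]

# f_Y(y): equiprobable mixture of isotropic Gaussians centred on the points
f = np.zeros(Y.shape)
for p in points:
    f += np.exp(-np.abs(Y - p) ** 2 / (2 * sigma2)) / (2 * np.pi * sigma2)
f /= len(points)

print(f.sum() * dx * dx)                      # probability mass caught by the grid
H_Y = -np.sum(f * np.log(f + 1e-300)) * dx * dx
H_N = 1 + np.log(2 * np.pi * sigma2)          # entropy of the noise, in nats
print(H_Y - H_N)                              # mutual information estimate, in nats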

This worked quite nicely for some signals, like this one: pdf-good
Most of the probability mass is included in the sampled grid (in fact it sums to 0.9922 here), so the summation serves as a good approximation to the integral.

But, at low SNR, a lot of probability mass spills over the sides:
pdf-bad
Here the probability mass inside the grid only sums to 0.6353. This gave utterly meaningless results (as I’ll show later). I also tried re-scaling the probabilities so that they’d sum to 1, but ultimately this didn’t help either.

Monte Carlo Method

When I mentioned this to my supervisor, he mentioned another possible method, using Monte Carlo instead.

First, flip the mutual information expression around and consider it the other way:
I(X;Y)=H(X)-H(X|Y)

In this case H(X) is easy — X is drawn equiprobably from an alphabet size of M. So H(X)=log(M).

H(X|Y) is the tricky part here, but… we can find it by Monte Carlo.
Pick a load of random points from the joint f_{XY}(x,y):
montecarlo
(This is an example from one of my optimised constellations)

Then for each point, -log(p_{X|Y}(x|y)) is calculated and the average is taken. Since the process is ergodic, this tends to the expectation, -\mathbb{E}log(p_{X|Y}(x|y))=H(X|Y)
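In the same spirit, here’s a NumPy sketch of the Monte Carlo estimate – again not my MATLAB implementation, and using the same arbitrary toy constellation and noise level as before:

#!/usr/bin/env python3
# Monte Carlo estimate of I(X;Y) = H(X) - H(X|Y) for a toy 4-QAM constellation.
import numpy as np

rng = np.random.default_rng(0)
points = np.array([a + 1j * b for a in (-1, 1) for b in (-1, 1)])
M = len(points)
sigma2 = 0.1                               # noise variance per real dimension
N = 100000                                 # number of Monte Carlo samples

x = rng.integers(M, size=N)                # equiprobable symbols
noise = rng.normal(scale=np.sqrt(sigma2), size=N) + 1j * rng.normal(scale=np.sqrt(sigma2), size=N)
y = points[x] + noise

# p(X=i|Y=y) by Bayes' rule: the Gaussian likelihood of each point, normalised
lik = np.exp(-np.abs(y[:, None] - points[None, :]) ** 2 / (2 * sigma2))
post = lik / lik.sum(axis=1, keepdims=True)
H_X_given_Y = -np.mean(np.log(post[np.arange(N), x]))

print(np.log(M) - H_X_given_Y)             # I(X;Y) estimate, in nats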

Comparison

The Monte Carlo method produced much more sensible results:
allmethods

Where the grid evaluation method gives meaningless negative capacities at low SNR, the Monte Carlo method gives results tending gracefully to zero.

In hindsight, this could probably have also been done using the original capacity expression, but calculating H(Y) by Monte Carlo… I’ll leave that as an exercise for the reader.

Results

Just for a little bit of fun, here’s a few constellations of different orders compared using the method above.
comparison

That’s about it for this post. I’ve run out of enthusiasm to write any more. In fact I did about half an hour ago. Maybe it shows. I almost gave up and deleted the post but I figured I’d finish it in case anyone cared. Do let me know in the comments if you found this useful! It’s good to know what posts interest people, and it helps for motivation too…

Here’s my MATLAB code for the two implementations. It’s not particularly tidy and the call to mvnpdf breaks in octave, but perhaps it will be useful as a reference for someone:
ConstellationInformation.m, ConstellationInformationMC.m


Reading BBC ROMs with an mbed

A quick post this time. I’ve had an mbed (LPC1768) lying around for ages which I’d never gotten around to doing anything more than blinking an LED with, and I also — for reasons too boring to go in to here — have some BBC computer ROMs which I don’t know much about. I was interested to know what was on them though, so I’m just going to say a little bit about how I used the mbed to read them.

IMG_9003

The ACORNOS and ACORNBASIC ROMs look legit, but as for the others… they appear to be EPROMs. These are chips which could be electrically written, and erased using UV light. To prevent accidental erasure you had to put a sticker over the window on the chip! I wonder if this could be an early case of pirated software? Or maybe it was common for software to be distributed on EPROMs? Since this is from over a decade before I was even born, I’m not really sure!

The ROMs were all of different part numbers but the first one I picked out was a 27C128 so I put together a reader on breadboard built around this. I figured the other chips must be all pin compatible anyway and a quick google for datasheets confirmed this. The circuit is super-simple: just connecting the chip to the 5v supply rails, pulling WE high (it’s an active low line), and connecting the data, address, CE and OE lines to the mbed’s GPIOs. I used all but one of the GPIOs. Cutting and stripping all of the wires was very tedious! The ROM chips need a 5v supply but have an input logic high level according to the datasheet of 2v. The mbed uses 3.3v logic but its inputs are 5v tolerant so it looks like we’re on to a winner.

I don’t use breadboard much so I only had one reel of solid core wire to hand. I hope you like green!

IMG_9005

Next step was writing software. The way to read the chip is summarised by the read waveforms from the datasheet:
readwaveforms

Essentially, you set the data address on the address bus, clock OE and CE low (they’re active-low lines), wait at least some minimum delay then read the data off the output bus. Afterwards, you clock OE and CE back high (in fact, I think you can leave one low all the time but I didn’t bother) and wait for the output to go back to tristate mode before repeating for the next word. The datasheet allowed ~250ns delays but just to be safe I used 1µs. Probably a good idea to allow for chips with different minimum timings anyway. Plus it would turn out later that the serial transfer is the bottleneck, rather than reading the chip.

Aside from that, the only thing to say about the code is that transferring the data to the computer over serial behaved a bit strangely. If I used Serial.putc() or Serial.printf(“%c”) on the data, I’d end up short of the expected 16KiB at the PC end. But if I used Serial.printf(“%d\n”), I got exactly the 16384 lines I expected. So instead of spending ages trying to debug this, I opted to use the decimal ASCII transfer and convert at the PC end.

The chip contents were just grabbed using a pipe:

 $ pv -s 56000 > romcontents-decimal

This gave a fairly useless decimal output (example from the ACORNOS ROM):

76
31
128
76
233
188
64
14
0
66

…etc etc.

So I threw together a quick bit of python to convert from the decimal ASCII to the raw binary:

#!/usr/bin/env python3
import fileinput,struct,sys
for line in fileinput.input():
	thisnum=int(line)
	thisbyte=struct.pack('B',thisnum)
	sys.stdout.buffer.write(thisbyte)

Piping the decimal through this gave the raw ROM contents – exactly 16KiB! This could be viewed with hexdump. Here’s a couple of examples:

alex@Apollo:~/bbcroms$ hd graphics-bin |head
00000000  00 00 00 4c 49 80 82 21  a0 47 52 41 50 48 49 43  |...LI..!.GRAPHIC|
00000010  53 20 45 58 54 45 4e 53  49 4f 4e 20 00 31 2e 30  |S EXTENSION .1.0|
00000020  32 00 28 43 29 31 39 38  33 20 43 4f 4d 50 55 54  |2.(C)1983 COMPUT|
00000030  45 52 20 43 4f 4e 43 45  50 54 53 2d 50 61 75 6c  |ER CONCEPTS-Paul|
00000040  20 48 69 73 63 6f 63 6b  a1 08 2c 80 02 30 13 c9  | Hiscock..,..0..|
00000050  02 d0 11 2c cb 0c 10 08  98 9d f0 0d 8d f0 0c c8  |...,............|
00000060  a9 02 28 60 48 8e df 0c  8a 48 98 48 ba bd 03 01  |..(`H....H.H....|

 


Most of the ROMs had some plaintext descriptor telling you the software and its version.

This segment from AcornOS is quite interesting – a few familiar names here!

00003bf0  91 4c f4 ff 20 3a 53 45  54 53 44 46 20 4c 44 41  |.L.. :SETSDF LDA|
00003c00  bb c0 28 43 29 20 31 39  38 31 20 41 63 6f 72 6e  |..(C) 1981 Acorn|
00003c10  20 43 6f 6d 70 75 74 65  72 73 20 4c 69 6d 69 74  | Computers Limit|
00003c20  65 64 2e 54 68 61 6e 6b  73 20 61 72 65 20 65 78  |ed.Thanks are ex|
00003c30  74 65 6e 64 65 64 20 74  6f 20 74 68 65 20 66 6f  |tended to the fo|
00003c40  6c 6c 6f 77 69 6e 67 20  70 65 6f 70 6c 65 2c 20  |llowing people, |
00003c50  63 6f 6d 70 61 6e 69 65  73 20 61 6e 64 20 6c 6f  |companies and lo|
00003c60  63 61 74 69 6f 6e 73 2c  20 63 6f 6e 74 72 69 62  |cations, contrib|
00003c70  75 74 6f 72 73 20 28 61  6d 6f 6e 67 20 6f 74 68  |utors (among oth|
00003c80  65 72 73 20 74 6f 6f 20  6e 75 6d 65 72 6f 75 73  |ers too numerous|
00003c90  20 74 6f 20 6d 65 6e 74  69 6f 6e 29 20 74 6f 20  | to mention) to |
00003ca0  74 68 65 20 64 65 76 65  6c 6f 70 6d 65 6e 74 20  |the development |
00003cb0  6f 66 20 74 68 65 20 42  42 43 20 43 6f 6d 70 75  |of the BBC Compu|
00003cc0  74 65 72 3a 44 61 76 69  64 20 41 6c 6c 65 6e 2c  |ter:David Allen,|
00003cd0  42 6f 62 20 41 75 73 74  69 6e 2c 52 61 6d 20 42  |Bob Austin,Ram B|
00003ce0  61 6e 6e 65 72 6a 65 65  2c 50 61 75 6c 20 42 6f  |annerjee,Paul Bo|
00003cf0  6e 64 2c 41 6c 6c 65 6e  20 42 6f 6f 74 68 72 6f  |nd,Allen Boothro|
00003d00  79 64 2c 43 61 6d 62 72  69 64 67 65 2c 43 6c 65  |yd,Cambridge,Cle|
00003d10  61 72 74 6f 6e 65 2c 4a  6f 68 6e 20 43 6f 6c 6c  |artone,John Coll|
00003d20  2c 43 6f 6d 70 75 74 65  72 20 4c 61 62 6f 72 61  |,Computer Labora|
00003d30  74 6f 72 79 2c 43 68 72  69 73 20 43 75 72 72 79  |tory,Chris Curry|
00003d40  2c 44 65 73 69 67 6e 65  72 73 20 6f 66 20 74 68  |,Designers of th|
00003d50  65 20 36 35 30 32 2c 4a  65 72 65 6d 79 20 44 69  |e 6502,Jeremy Di|
00003d60  6f 6e 2c 54 69 6d 20 44  6f 62 73 6f 6e 2c 4a 6f  |on,Tim Dobson,Jo|
00003d70  65 20 44 75 6e 6e 2c 50  61 75 6c 20 46 61 72 72  |e Dunn,Paul Farr|
00003d80  65 6c 6c 2c 46 65 72 72  61 6e 74 69 2c 53 74 65  |ell,Ferranti,Ste|
00003d90  76 65 20 46 75 72 62 65  72 2c 4a 6f 6e 20 47 69  |ve Furber,Jon Gi|
00003da0  62 62 6f 6e 73 2c 4c 61  77 72 65 6e 63 65 20 48  |bbons,Lawrence H|
00003db0  61 72 64 77 69 63 6b 2c  44 79 6c 61 6e 20 48 61  |ardwick,Dylan Ha|
00003dc0  72 72 69 73 2c 48 65 72  6d 61 6e 6e 20 48 61 75  |rris,Hermann Hau|
00003dd0  73 65 72 2c 48 69 74 61  63 68 69 2c 41 6e 64 79  |ser,Hitachi,Andy|
00003de0  20 48 6f 70 70 65 72 2c  49 43 4c 2c 4d 61 72 74  | Hopper,ICL,Mart|
00003df0  69 6e 20 4a 61 63 6b 73  6f 6e 2c 42 72 69 61 6e  |in Jackson,Brian|
00003e00  20 4a 6f 6e 65 73 2c 43  68 72 69 73 20 4a 6f 72  | Jones,Chris Jor|
00003e10  64 61 6e 2c 44 61 76 69  64 20 4b 69 6e 67 2c 44  |dan,David King,D|
00003e20  61 76 69 64 20 4b 69 74  73 6f 6e 2c 50 61 75 6c  |avid Kitson,Paul|
00003e30  20 4b 72 69 77 61 63 7a  65 6b 2c 50 65 74 65 72  | Kriwaczek,Peter|
00003e40  20 4d 69 6c 6c 65 72 2c  41 72 74 68 75 72 20 4e  | Miller,Arthur N|
00003e50  6f 72 6d 61 6e 2c 47 6c  79 6e 20 50 68 69 6c 6c  |orman,Glyn Phill|
00003e60  69 70 73 2c 4d 69 6b 65  20 50 72 65 65 73 2c 4a  |ips,Mike Prees,J|
00003e70  6f 68 6e 20 52 61 64 63  6c 69 66 66 65 2c 50 65  |ohn Radcliffe,Pe|
00003e80  74 65 72 20 52 6f 62 69  6e 73 6f 6e 2c 52 69 63  |ter Robinson,Ric|
00003e90  68 61 72 64 20 52 75 73  73 65 6c 6c 2c 4b 69 6d  |hard Russell,Kim|
00003ea0  20 53 70 65 6e 63 65 2d  4a 6f 6e 65 73 2c 47 72  | Spence-Jones,Gr|
00003eb0  61 68 61 6d 20 54 65 62  62 79 2c 43 68 72 69 73  |aham Tebby,Chris|
00003ec0  20 54 75 72 6e 65 72 2c  41 64 72 69 61 6e 20 57  | Turner,Adrian W|
00003ed0  61 72 6e 65 72 2c 57 69  6c 62 65 72 66 6f 72 63  |arner,Wilberforc|
00003ee0  65 20 52 6f 61 64 2c 52  6f 67 65 72 20 57 69 6c  |e Road,Roger Wil|
00003ef0  73 6f 6e 2c 41 6c 61 6e  20 57 72 69 67 68 74 2e  |son,Alan Wright.|

In case the contents of these ROMs are useful to anyone, here’s a tarball of all of them. It contains:

  • ACORNBASIC, unknown version, 1981
  • ACORNOS 032 (?) 12-Aug-1981
  • MOS+ 1.15, 3-May-1988
  • DFS 2.26, 1985
  • Computer Concepts Graphics Extension 1.02, 1983

Since I have no use for these chips, I’m probably going to donate them to the Centre for Computing History. Check them out if you’re in Cambridge!


An 88 Byte Bare-Metal Mandelbrot Generator

A while back, I was playing around with a bit of 8086 assembly. I’m not much of a programmer, but I’ve done some C{,++}, Python, PHP (sorry) and bits of a few others in the past. All of these are quite high level though, and I felt like going back to basics a bit. I chose to start with MS-DOS, since I thought (perhaps incorrectly?) that it would be easier to program. And I figured it would probably be a nice bit of computing history to learn anyway.
I soon moved on from MS-DOS and decided to set myself a programming challenge of a bare-metal Mandelbrot generator. In this post, I’ll give a bit of background about the Mandelbrot set, and explain my code.

MS-DOS

I started off with this MS-DOS assembly tutorial. It seems to be broadly unchanged since it was first crawled by the Wayback Machine in 2002. Nice. I chose to use NASM instead of A86 but otherwise the tutorial’s pretty good. If you’re lazy, here’s a virtualbox image, with MSDOS 6.22 and NASM installed. My code is all in C:\PROGRAMM, in case anyone wants to use it as a reference.

This was fun for a while, but eventually I got bored of DOS programming and decided to try some bare metal stuff – and somehow, perhaps vaguely influenced by demoscene style demos, I ended up setting myself the challenge of writing a bare-metal Mandelbrot generator within a 512-byte bootsector.

The Mandelbrot Set


The Mandelbrot Set is the set of points, c, on the complex plane such that the sequence z_{n+1}=z_n^2+c (where z_0=0) remains bounded. To clarify, here’s an example:

  • Let’s check if the point (-0.5+0.5j) is in the set. Set c=(-0.5+0.5j)
  • Now let’s perform the first iteration. (-0.5+0.5j)²+(-0.5+0.5j)=-0.5
  • And another iteration: (-0.5)²+(-0.5+0.5j)=-0.25+0.5j
  • And another: (-0.25+0.5j)²+(-0.5+0.5j)=-0.6875+0.25j
  • We can keep on iterating again and again like this, and we find that the numbers stay small. Eventually, we might conclude that the sequence is bounded, and the point is therefore in the set. (There’s a quick way to check these numbers just below.)
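If you do want to check them, Python’s built-in complex type makes it a few lines’ work:

#!/usr/bin/env python3
# Reproduce the iterations above for c = -0.5+0.5j.
c = -0.5 + 0.5j
z = c                     # starting from the first iterate, as above
for _ in range(8):
    z = z * z + c
    print(z, abs(z))      # the magnitude stays small - it looks bounded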

In fact, if we plot each iteration, we can see that it’s going around in a strange, almost spirograph-like shape:

But overall, the general trend seems to be going around in a closed circle, rather than shooting off to infinity, so it would probably be fairly safe to say here that the sequence is bounded. If you’re not convinced, here’s a plot of the magnitude at each iteration:

To take another example, let’s try it with the point (0.5,0.5). The iterations go:

  • 0.5+1j
  • -0.25+1.5j
  • -1.69-0.25j
  • 3.28+1.34j
  • 990294279-77569977j
  • So, this looks like it’s shooting off to infinity… and fast.

    But this raises an interesting question: at what point do we decide whether the sequence is or isn’t bounded? In general we’d have to continue iterating to infinity to know for sure. Perhaps we should set a threshold, beyond which we declare it to be unbounded? But where should it be? And how do we know that sequences which exceed that threshold aren’t coming back?

    Well, as it turns out, it can be proven (see A6d) that if any point leaves a circle of radius 2, (ie |zn|>2), it will never return. So a reasonable approach is to compute a fixed number of iterations, and then check whether the result is less than 2 in absolute value.

    Here are two simple implementations I wrote just to demonstrate the idea. First, this is for the C++-minded people:

    #include <iostream>
    #include <complex>
    using namespace std;
    int main(){
    	cout << "P1 800 800 ";
    	for(int i=0;i<800;i++){
    		for(int j=0;j<800;j++){
    			complex<float> c((j-400)/200.0,(i-400)/200.0); //scale the indices to floats in the range -2..2
    			complex<float> z(0,0); // initialise z to 0
    			for (int iter=0;iter<64;iter++) //iterate 64 times
    				z=z*z+c;
    			cout << ((abs(z)<2)?"1 ":"0 "); //output a black pixel if in the set, white otherwise
    			}
    		}
    	cout<<endl;
    	return 0;
    	}

    This goes through each pixel one-by-one and for each one runs 64 iterations. The output is a PBM file to stdout, so you’ll have to pipe it and open it in a compatible viewer (GIMP works) or convert it to a more common format.

    Or, if you’re more of a MATLAB fan:

    c=ones(800,1)*linspace(-2,2,800);
    c=complex(c,c');
    z=zeros(800,800);
    for i=1:50
    	z=z.^2+c;
    end
    imshow(abs(z)<2)

    The same idea as the C++ example, but using vectorisation rather than iterating through pixels, and using MATLAB’s internal imshow command to display the image.

    The Bare-Metal Version

    With all that setting the scene done, let’s get down to how I wrote an 88-byte 8086 bare-metal mandelbrot generator.

    A note on arithmetic

    The general structure of the C++ example can be carried over, but the arithmetic gets difficult. Firstly, the complex number datatype in C++ is an abstraction. The processor doesn’t understand complex numbers (not an 8086 at least). So we’ll have to treat them as two reals. Addition and subtraction are trivial. For multiplication we just have to remember that:
    (a+bj)(c+dj)=(ac-bd)+(ad+bc)j
    and specifically:
    (a+bj)^2=(a^2-b^2)+2abj

    This is conceptually not too difficult, but it does mean that complex operations take up a lot of program space.

    The second thing to consider is that floating point operations don’t exist in 16-bit intel. We’re going to have to work in integer instructions instead. With a little bit of trickery though, we can create a very simple fixed-point system. Essentially, we just represent every number as a fixed multiple of its true value. Say, this could be 10. Addition (and by extension, subtraction) is consistent:
    (10a)+(10b)=10(a+b)

    Multiplication works too, with the caveat that we have to divide by the scale factor after:
    \frac{(10a)(10b)}{10}=10ab

    It’s for this reason that it’s best for that multiple to be a power of two: this reduces the division to a right shift.
    For example, here is how we might multiply two numbers with a scaling of 1/32:

    mov al,73 ; move the number 2.28 (73/32) into al
    mov bl,-103 ;move -3.22 (-103/32) into bl (it needs to fit in a signed byte)
    imul bl ; multiply bl (implicitly, with al). stores result in ax.
    sar ax,5 ; shift right by 5 places to divide by 2^5=32

    Note the use of sar rather than shr to preserve the sign.

    The code

    So, without further ado, let’s start deconstructing my code, a section at a time:

    [bits 16]
    [org 7c00h]
    mov ax,0fh
    int 10h

    First a bit of initialisation – instructions to the assembler that the code should be 16-bit, and will be loaded into memory at location 0x7c00.
    Then, we call BIOS interrupt 10h with AH=0 (change video mode) and AL=0fh (640×350 mono resolution).

    This video mode will fit a nice big 512×256 canvas on the screen.

    Total assembled size so far: 5 bytes

    mov dx,-128 ;start on the first line
    yloop:
    mov di,-256 ;go to the beginning of the line
    xloop:
    ;set z=c initially - start on iteration 1
    mov si,di
    mov bp,dx
    xor ax,ax ; set colour to black (al=0) and iteration counter to zero (ah=0)

    So, here we initialise the two variables for our c value. DX will contain the y value and DI the x value. I’m starting at y=-1 and x=-2, using a scaling factor of 1/128. My canvas is to be y from -1 to 1 and x from -2 to 2. The z variable is stored with its real part in SI and its imaginary part in BP. We skip out the zeroth iteration here by initialising z=c.

    Later, the interrupt to write a pixel to the screen will read the pixel value from AL. This wants to be initialised to zero (black). If the pixel is in the set, this will be changed to one (white) later. Otherwise it will be unchanged.
    At the same time, I want to set an iteration counter to zero. Sneakily, I’ve chosen to use the other byte of AX for this, so that I can set both the pixel value and the iteration counter to zero in a single instruction. This single xor saves two bytes vs two 8-bit literal moves.

    This block: 12 bytes, So far: 17 bytes.

    iterate:
    ;calculate Im(z^2)=2xy
    mov cx,bp
    imul cx,si
    jo overflow
    sar cx,6 ;maybe mov 6 into a register first?
    ;cx contains Im(z^2)

    At each iteration step we need to calculate z_{n+1}=z_n^2+c. To do this we need to calculate the real and imaginary parts of z_n, then add these to the real and imaginary parts of c.

    As the comment suggests, here we’re calculating Im(z_n^2), which as I showed previously is 2Re(z_n)Im(z_n). We copy Im(z_n) into CX, since we’ll need it again later and we don’t want to clobber it. Then we multiply this by SI (Re(z_n)) and store the result in CX. The value in this register should now be 128Re(z_n)Im(z_n) (note what I said about fixed-point multiplication earlier). Ignoring the conditional jump for now, we would then want to divide by 128 to get Re(z_n)Im(z_n). Except what we want is actually 2Re(z_n)Im(z_n), so we can save ourselves a multiplication by just dividing by 64 instead (right shift by 6).

    Now, back to the jump. This is how we catch the sequences which go off to infinity – we’ll see where it goes later. In the C++ example, we used abs(z) to check whether we were inside an r=2 circle. But calculating an abs value is expensive. It requires us to square two numbers and sum them. But consider this: any point which leaves the r=2 circle will eventually go off to infinity. If I wanted to I could use an r=3 circle, or an r=1,000,000 circle – it would just potentially take more iterations. Any region which completely contains the r=2 circle will do the job. So how about a rectangle?

    My jump is conditional on 128Re(z_n)Im(z_n)=64Im(z_n^2) being greater than 32768/128 in magnitude (the signed 16-bit max, after scaling). That is, |Im(z_n^2)|>8

    So, provided that |Im(c)|<6 - which it always will be since we're only working with y in the range of -1 to 1 - this will contain the whole of the r=2 circle. Perfect.

    This block: 10 bytes, So far: 27 bytes.

    ;calculate Re(z^2)=x^2-y^2
    mov bx,si ;we will work in the BX register for now
    add bx,bp ;bx contains x+y
    sub si,bp ;si contains x-y
    imul si,bx
    jo overflow
    sar si,7 ;si contains Re(z^2)

    areas

    And now to calculate Re(z_n^2). As noted previously, this is given by Re(z_n)^2-Im(z_n)^2. Earlier versions of my code calculated exactly this – multiplying each part by itself, then subtracting one from the other. Then I realised I was missing a trick here – I could factorise a^2-b^2=(a+b)(a-b), thus gaining an ADD (+2 bytes), but losing an IMUL (-3 bytes). A saving of 1 byte – score!

    After we’ve calculated Re(z_n)+Im(z_n) with the add bx,bp, we no longer need Re(z_n) (SI), so we save ourselves an MOV by clobbering it in the SUB instruction.

    Again, we check for overflow and SAR 7 to take out the factor of 128. Here, the overflow checks if |128Re(z_n^2)|>\frac{32768}{128}, ie |Re(z_n^2)|>4. Over the range of |Re(c)|<2 - exactly the space we're working in - this completely contains the r=2 circle. In fact, at the far left and right edges of the canvas, they just touch. An illustration is shown to the right: including the abs(z^2)<2 criterion in grey and the rectangular criterion we're using here in blue. The rectangle moves around relative to the axes depending on the point being calculated.

    This block: 14 bytes, So far: 41 bytes.

    ;calculate z'=z^2+c
    add si,di
    add cx,dx
    mov bp,cx
    ;do another iteration
    inc ah
    jno iterate

    Here are the final steps to calculate z_{n+1}. We then increment the iteration counter, AH, and unless it overflows (passes 255), we go round again. Whee! 255 iterations is thoroughly excessive, but it allows us to just check overflow, rather than doing a CMP with an immediate value, which would cost 3 bytes.

    This block: 10 bytes, So far: 51 bytes.

    ;iterations are over
    inc al ; if we've gotten all this way, set the colour to white
    overflow:

    If we get here just by the natural progression of the program, that will be because we went through 255 iterations without overflowing. Therefore the pixel is in the set, so we set it to white - a value of 1. Since I know its value before is zero, I can use an INC (2 bytes) instead of a MOV immediate (3 bytes). If we get here via one of the jump on overflows, we skip out of this instruction and the pixel remains black.

    This block: 2 bytes, So far: 53 bytes.

    ;now write a pixel
    mov ah,0ch ; write pixel interrupt
    xor bh,bh
    mov cx,di
    add cx,320
    add dx,175
    int 10h
    sub dx,175

    Here we use another INT 10h interrupt to write the pixel to the screen. We set AH=0Ch for the "write a pixel" command, set BH=0 for page zero (using XOR - no size advantage over MOV immediate here, but faster), and stick the coordinates in CX and DX (in fact, I chose to use DX to store the y coordinate anyway, to save a MOV instruction here). Since our coordinate system was centred around zero, we add 320 and 175 to the x and y coordinates respectively, to centre the image on the screen. After the interrupt, we have to take the 175 back off DX, since it serves as our loop counter so we can't clobber it. All of these ADD and SUB immediates are VERY expensive instructions - each one takes 4 bytes. I haven't thought of a way to optimise this though.

    This block: 20 bytes, So far: 73 bytes.

    ;loop around, do the next pixel
    inc di
    cmp di,255
    jne xloop
    ;or if we've gotten to here, draw the next row
    inc dx
    cmp dx,127
    jne yloop

    And some loop logic. We go onto the next pixel in the line (inc DI), and loop around unless the pixel value's 255 (end of the line). This CMP literal is expensive (4 bytes) but not easily avoided - since it's a signed type (and we're only using the lower 9 bits anyway), we can't just test for overflow. Outside of this, we have the same structure to loop through lines.

    This block: 13 bytes, So far: 86 bytes.

    cli
    hlt
    db $-$$ ;tell us the length of the code
    times 510 - ($ - $$) db 0
    dw 0xAA55

    And finally a little bit of boilerplate: clear BIOS interrupts and halt. The db $-$$ line is a little hack to make it easier to see the size of the program - it just inserts a byte of value (this memory location-memory location of program start).
Then I just look at the code in xxd and the last byte before the long run of zeros is the size of the program. It needs to be padded up to 512 bytes (the size of a boot sector), with the magic word AA55h at the end to indicate that it's bootable. And so that's a final 2 bytes of executable code (I'm not counting the padding, of course), bringing us up to 88 bytes! The complete code is here: mandelbrot-bootsector.asm

    The result

    The code is compiled with nasm:

     $ nasm -o mandelbrot mandelbrot-bootsector.asm

    And run in qemu:

     $ qemu-system-i386 -video cirrus -fda mandelbrot -net none

    Here’s a screenshot and a video of it running:
    mandelbrotscreenshot

    Possible Optimisations

    I’ve no doubt this code could be made smaller by someone more skilled than I am. I can see a few areas for potential optimisation, but can’t think how to implement them without breaking other things. For example, when writing the pixels to the screen, there are 3 ADD/SUB immediate instructions. These take up 4 bytes each – if I could have the immediate values in a register instead, that would save two bytes per instruction. But alas, I’m already scraping the bottom of the barrel for registers (I tried using SP, but since BIOS interrupts apparently use the stack, that breaks things).

    Similarly, the CMP literals for the loop logic are pretty expensive. Avoiding literals is always desirable.

    Perhaps it’s possible to save operations by foregoing the BIOS interrupt to write pixels and just write to video memory instead? I haven’t looked into this at all.

    Dissolve Animation

    I made another version of my code which makes the set “dissolve” in. Essentially it uses an LFSR to cycle through pixel values, rather than a loop. I think it gives a pretty cool visual effect. I was going to write more on it, but this post is getting long enough already, so I’ll just leave the code and a video here if anyone’s interested. It’s not as heavily optimised as the code above.
    mandelbrot2-ver4.asm

    Conclusion

    This has turned out to be a very long blog post (my word count is saying over 2,700 words) but hey, I guess I had a lot to say. I had great fun learning assembly with this programming challenge, and I hope you’ve found my explanation interesting too. If you can optimise it any more, I’d be interested to see your results, so do let me know in the comments!


Sinclair C5 Indicators

A while back, I bought a Sinclair C5.

IMG_7577IMG_7582

I love it. It’s an amazing piece of history and something really unique. Sure, it’s not that practical so I don’t drive it every day, but it’s great fun to take out for a spin every now and then.

I’ve got various projects on the go to improve it, and one of them was making indicators for it.

The original indicators looked like this:

The front ones are fairly standard but the rear ones are specially made for the C5 – as you can see they fit flush against the curved edge. The problem is, these original indicators are very rare and expensive nowadays — they can easily fetch over 100GBP for a set.

So I set about making my own. My friend Eddie Tindall, a very talented designer, recommended that Renault Megane indicators would go quite well with the style of the C5. So I bought some of those and pried the plastic diffusers free from the backings. The glue was quite brittle, so they came off OK with a little bit of care.

The reason for doing this was that I wanted to use LEDs rather than the original incandescent lamps. I wanted to avoid retrofits though because I thought they probably wouldn’t be bright enough.

I hacked up a quick PCB in EAGLE CAD. It was really rough and dirty — the polygons were all hand drawn and had funny shaped gaps between them, etc. But in the spirit of my previous post, I didn’t want to bother spending ages getting it perfect. Once I was fairly happy the design was OK, I sent it off to Smart Prototyping in HK for manufacture. 18 days later, the 5 copies of the PCB arrived.

IMG_20140727_221302

Each board is for two indicators — one left and one right, since the two indicators are mirror images of each other. You’re looking at the front of one indicator PCB and the back of the other here. They needed to be guillotined in half along the silk screen diagonal line. The circles on the top right of that image (also present on the back at the bottom left) are my idea for thermal management: I calculated that if I soldered coins to these pads (1p coin for the circular ones, 2p coin cut in half for the semicircular ones), it would give the indicators enough thermal mass to keep cool in normal use. This is pretty essential, since the LEDs I was using would be dissipating quite a lot of power.
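To give a feel for the sums involved, here’s the sort of back-of-envelope calculation I mean – the LED dissipation is a guess rather than a measurement, and the coin mass and specific heat are just rough textbook values:

#!/usr/bin/env python3
# Back-of-envelope heat sinking sum. The power figure is an assumption.
power = 2.5            # W dissipated in one indicator's LEDs (assumed)
mass = 0.0036          # kg, roughly a 1p coin
c_copper = 385.0       # J/(kg K), specific heat capacity of copper
seconds = 10
rise = power * seconds / (mass * c_copper)
print(rise)            # ~18 K after 10 seconds on continuously

And since an indicator only flashes at roughly a 50% duty cycle, the real-world rise is about half that again.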

IMG_20140725_201029

Here’s one of the boards soldered up, from both sides:
IMG_20140727_222611IMG_20140727_222633

I couldn’t find a hacksaw, so I just blobbed a load of solder where the 2p halves should have been for now. It’s not quite as effective, but it might be good enough. It will run for quite a few seconds before the 1p coin becomes noticeably warm, but the solder blobs heat up a bit quicker. Soldering this was fairly easy since I have a ~90W iron, but would probably be impossible with a cheap maplin iron.

Here’s a picture of one working, hooked up to a 700mA constant current supply.
IMG_20140725_204652
I’m not entirely sure how to measure the brightness, but they were bright enough to give me spots in my vision. For comparison, the backdrop of that picture is in mid-evening british summer sunlight and the exposure was apparently f/2.8, ISO 50, 1/250 seconds.

I’m reasonably happy with how these turned out. I still need to work out how I’m going to fix the PCB to the diffuser (probably some sort of high temperature epoxy), and then fix them to the C5. But before I can do that, I need to get the rest of the electronics done — and that’s a job for another day.


Perfect is the Enemy of Done

I’ve finally gotten round to re-hosting my blog — I was previously using Gandi‘s Dotclear installation which came free with my domain name but I got fed up of it because the comment spam filtering was useless and I found the markup difficult to use. The old blog is currently still hosted at https://tempblog.m0tei.co.uk but may move in the near future. This blog is now hosted by the lovely people at the SRCF (full disclosure: I’m on the committee of the SRCF).

I’ve been intending to re-host it for a while but I spent a long time deciding what platform to use. I ruled out wordpress early on because it was too bulky and slow — PHP and MySQL, really!? For just a blog!? I tried a few static content generators but ruled those out for some reason or another too, and I eventually decided that this was a job for later, when I could find time to do it perfectly.

And that’s the problem. Recently, I realised that a lot of my projects end up unfinished because I can’t do them perfectly. My bench frequency reference, my reprap, my Sinclair C5 indicators (I’ve bought a Sinclair C5 since my last post — more on that another time) and various programming projects — all ended up coming to a stall. Or if for some reason I absolutely have to finish something, I tell myself that there’s no point in doing it just alright — if it’s not going to be perfect anyway, I may as well put the absolute bare minimum effort in and do a horrific bodge of it.

But enough is enough. As they say, perfect is the enemy of done. Over the last week or two I’ve been trying to get myself into that mindset. I’m going to start finishing some projects which have dragged on for too long. I may not be able to do them perfectly but that’s fine – I’ll at least get them done. First up was migrating my blog. WordPress may not be perfect, but it will get the job done. So I’ve used that and now the job is done.

To help me keep this in mind, I made a motivational poster to go on my wall — I took a piece of scrap paper, scrawled “PERFECT IS THE ENEMY OF DONE” on it and taped it up in front of me with parcel tape.

IMG_20140705_185134

It’s not perfect, but hey — it’s done.