So, Sonic Forces was released this week... I saw a bunch of new images suddenly appear on the web:
And of course, a new one on Sonic channel (sonic.sega.jp)
STMicroelectronics, to their credit, provide everything you could ever need for development on their line of processors. However, when it comes to certain aspects of the hardware, they tend to abstract it to the nth degree.
In many applications it is acceptable to use unused areas of the system FLASH to store data. This is where EEPROM emulation comes in. Now, ST provide example code for this, delivered via their STM32Cube suite. There is just one problem: whoever wrote that code was a very big fan of abstraction, to the extent that my head hurt and I couldn't get it to work. Compared to the STM32_StdPeriph_Library they provided, it reads like C++ code where the developer went completely overboard with OOP. So, following the KISS principle, here is how I did it.
First things first
Knowledge of what you are doing is critical here. If you get it wrong, you will see the CPU throw hard-fault exceptions in the debugger. Additionally, any source code presented here was only tested on the STM32F072RBT8 processor and will require modification for other processors in the same family.
So, we know our application will fit into the 128k FLASH with space to spare; the linker's map file tells us exactly how much is used. Next we look in the reference manual to see where the FLASH resides:
So, the FLASH memory begins at 0x08000000 and continues upwards for 128k. Further in the reference manual we see a really important table that tells us exactly where the last page of FLASH resides:
So from the above, page 63 is the last page in FLASH. Now we need to consider a few basic things about FLASH: a page must be erased (back to all 0xFF) before it can be reprogrammed, and programming is done a half-word (16 bits) at a time. With 2k pages, page 63 starts at 0x08000000 + 63 × 0x800, giving us our emulated EEPROM region:

0x0801F800 through 0x0801FFFF
Making sure we don't use it
Now that we know where we will emulate EEPROM, a few modifications to the project's linker script are in order.
In the .ld file we find text that describes where stuff is allocated. Below are my modifications to a standard linker script as generated by Atollic TrueStudio:
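The original listing is gone, so here is a minimal sketch of the kind of change involved, assuming a stock TrueStudio-generated script for the F072RB (128k FLASH, 16k RAM): shrink the FLASH region by one 2k page so the toolchain never places code or constants in our reserved page.

```ld
/* MEMORY regions from the generated .ld, with FLASH shortened
   from 128K to 126K so the last 2K page (at 0x0801F800) is
   never touched by the linker */
MEMORY
{
  FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 126K
  RAM  (xrw)  : ORIGIN = 0x20000000, LENGTH = 16K
}
```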
The actual code
Couldn't be simpler than this: (eeprom.c)
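The listing didn't survive here either, so below is a KISS-style sketch of what eeprom.c boils down to. The function names (eeprom_save, eeprom_load) are mine, not necessarily the originals; when built for the target (STM32F0XX defined) it maps onto the StdPeriph flash routines, and otherwise a RAM array stands in for the flash page so the logic can be exercised on a PC.

```c
#include <stdint.h>
#include <string.h>

#define EEPROM_BASE  0x0801F800UL  /* last 2K page on the STM32F072RB */
#define EEPROM_SIZE  0x800UL

#ifdef STM32F0XX
#include "stm32f0xx_flash.h"
/* On the real part: unlock, erase the page, program half-words. */
static void flash_erase_page(uint32_t addr)
{
    FLASH_Unlock();
    FLASH_ErasePage(addr);
    FLASH_Lock();
}
static void flash_write_halfword(uint32_t addr, uint16_t v)
{
    FLASH_Unlock();
    FLASH_ProgramHalfWord(addr, v);
    FLASH_Lock();
}
static uint16_t flash_read_halfword(uint32_t addr)
{
    return *(volatile uint16_t *)addr;
}
#else
/* Host-side stand-in: a RAM array emulates the flash page. */
static uint8_t sim_page[EEPROM_SIZE];
static void flash_erase_page(uint32_t addr)
{
    (void)addr;
    memset(sim_page, 0xFF, sizeof sim_page);  /* erased flash reads 0xFF */
}
static void flash_write_halfword(uint32_t addr, uint16_t v)
{
    memcpy(&sim_page[addr - EEPROM_BASE], &v, sizeof v);
}
static uint16_t flash_read_halfword(uint32_t addr)
{
    uint16_t v;
    memcpy(&v, &sim_page[addr - EEPROM_BASE], sizeof v);
    return v;
}
#endif

/* KISS EEPROM: erase the reserved page, then burn the settings
   as consecutive half-words (flash programs 16 bits at a time). */
void eeprom_save(const uint16_t *data, uint32_t count)
{
    flash_erase_page(EEPROM_BASE);
    for (uint32_t i = 0; i < count; i++)
        flash_write_halfword(EEPROM_BASE + 2u * i, data[i]);
}

void eeprom_load(uint16_t *data, uint32_t count)
{
    for (uint32_t i = 0; i < count; i++)
        data[i] = flash_read_halfword(EEPROM_BASE + 2u * i);
}
```

No wear-levelling, no virtual addresses: a save erases the page and rewrites everything, which is perfectly adequate for settings that change rarely.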
I am pleased to say that the tide is turning, albeit very, very slowly, over at the Ministry. This comment on DISQUS, in response to this article, made my day:
Update: 9 October 2017
I had a feeling so I went to check... Jose's post has been removed. In addition, the pussies banned the user account I used to comment on that post. So banning by association, it seems...
The Digital Audio interface on the HDM01 was straightened out this weekend. Basically, the solution to ensuring it doesn't bog down the CPU was to use a circular buffer and a DMA channel. With this arrangement, the byte order in the buffer is pretty much guaranteed to be correct, which is important. The DMA also ensures there is only an interrupt every 8 stereo audio samples, i.e. when the circular buffer is about to wrap around.
Here are some details on audio formats on the desktop PC:
The above figure shows the relationship between what is sent on the I2S bus, from the source, in this case, a normal .WAV file.
With the STM32F0xx set up to use DMA on I2S interface 1, we find the following format of data appearing in the circular buffer:
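The figure showing the buffer contents is gone, but in 24-bit mode the F0's SPI/I2S peripheral delivers each channel sample as two 16-bit half-words: the first carries bits 23:8, the second carries bits 7:0 in its high byte. A sketch (the function name is mine) of reassembling one sample out of the circular buffer:

```c
#include <stdint.h>

/* Rebuild one 24-bit sample from the two half-words found in the
   DMA circular buffer, then sign-extend it to a 32-bit integer.
   Assumes the packing described above (first half-word = bits
   23:8, second half-word = bits 7:0 in its upper byte). */
int32_t i2s24_to_i32(uint16_t hw0, uint16_t hw1)
{
    uint32_t raw = ((uint32_t)hw0 << 8) | (hw1 >> 8);  /* 24-bit value */
    if (raw & 0x800000u)        /* sign bit of the 24-bit word set? */
        raw |= 0xFF000000u;     /* extend the sign into the top byte */
    return (int32_t)raw;
}
```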
In order to translate this buffered audio data to a VU meter on a graphic display, we need to delve into the PCM. The following is what I figured out from playing around with Cool Edit / Adobe Audition:
The values are, quite obviously, signed 2's complement. To translate this to a VU display, we need to implement, in the digital domain, the method used for driving an analog VU meter. This was typically done with an amplifier and a full-wave rectifier made of germanium diodes.
First order of business is to convert the negative part of the signal into a positive one, so that we're essentially doing full-wave rectification without a smoothing capacitor:
Sample Processing Code
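The listing under this heading didn't survive, so here is a sketch of the rectification step, assuming 32-bit signed samples (the helper name is mine, not necessarily the original):

```c
#include <stdint.h>

/* Digital full-wave rectifier: fold the negative half-cycles up
   into positive territory, the same job the germanium-diode
   bridge did for an analogue VU meter. The int64_t cast avoids
   overflow when negating INT32_MIN. */
uint32_t rectify(int32_t sample)
{
    return (sample < 0) ? (uint32_t)(-(int64_t)sample)
                        : (uint32_t)sample;
}
```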
The above code then means we have positive integers (unsigned) for the signal we receive, and since we only sample the level every 8 audio samples, it is technically damped. However this is not the entire story. More about this in the next instalment.
As one can imagine, the media, especially our good friends at MyBroadband, are all on the bandwagon about IoT (Internet of Things). Well, I take a pretty dim view of this reinvention of the wheel, but now is the time to put the career liars, aka journalists, in their place about this stuff.
What is all the hype about?
LOLWUT, we've had little things on the internet as far back as 2001, when cellular modem modules became easily available and GPRS was in place. Just don't educate the journalists on that one, OK; they don't like their bubble to be burst. But let's move on, shall we...
First we saw Zigbee in 2005, which was supposed to be something along these lines. It was also hyped to the extreme, but turned out to be not so cool after all, with its proprietary, complicated stack and its demands on RF design expertise (it operates in the 2.4GHz band, smack bang in the middle of the Wi-Fi channels).
SigFox? More like "give us your monies and we might allow you to play"... Merci beaucoup!
Fast forward to 2011, and we have these French twits, SigFox, who developed their own proprietary protocol using the ISM bands (which are already crowded in South Africa with every man and his dog and their remote controls for security gates and alarms) and then relay these small packets over a cellular network. Clever idea, that: piggyback on the machine of the greedy, money-hungry cellular networks.
However, as it turns out, it's not that simple. I recently took on a client who had a good product idea using this network, and guess what, we were met with a roadblock. SigFox wanted money, pretty much the same setup the USB Implementers Forum runs, where you pay for a Vendor ID, that magical hexadecimal number that buys your spot in the sun on the USB bus. So yeah, what MyBroadband won't tell you is the following facts:
It is as easy as chips to do this with off-the-shelf parts: a few good ISM-band radios from TI or Silicon Labs, and a base station with a good modem on a private APN, as the alarm system companies have done for a long time.
I am in the wrong business, I tell you... I should also make money off the general stupidity of the public.
So I have been hard at work trying to get my headphone amplifier finished. One of the design ideas was to send the digital audio in pass-through fashion via the CPU. However, since the STM32F072RBT8 cannot generate the exact sampling rates (there is a small percentage error), I decided to make it "listen" to the I2S bus instead. This is done so that I can do stuff like a level display or a spectrogram. It also puts to rest any possible accusations from audiophiles that I am "fiddling" with the audio, and means there cannot be any jitter whatsoever.
It was not difficult to get the digital audio up and running; however, few appreciate how much work it is for a CPU to do processing on audio. I am presently reading in all 24 bits at a 48kHz sampling rate, and right off the bat we are seeing the CPU take strain.
Therefore, there is only one thing to be done: DMA (direct memory access) to the rescue. That means chucking everything we receive into a circular buffer and processing it at leisure, i.e. computing the FFT for the spectrogram and the dB level for the VU display. The DMA has all the necessary logic to hold off the ARM CPU and all that shit that makes my brain hurt trying to think about it. More about my success (or failure) with that in an upcoming blog post.
With DMA it would then be possible to pass through all the audio, but alas, the STM32F072 cannot generate accurate master clocks (the percentage error is rather high at 192kHz). While I am sure this would not be seen or heard except with insanely expensive Audio Precision test gear, I do not want to even entertain the idea; I don't want to provide fuel for audiophile debates, which in turn would condemn this project before it even gets a chance in the market.
Interesting times during debugging
Without fancy (aka fucking expensive) test equipment, I needed to get creative to test the interfaces. And I did, using rudimentary tools. The three applications I used (in conjunction with my PC sound card's OPTICAL OUTPUT) were as follows:
Then I opened it for editing in WinHex and was greeted by the familiar WAV file formatting. Using the reference given here, I found where the audio samples began and ended, and hence the places where I could do stuff.
Using WinHex I marked the start and end positions of the audio data as a block, then used the fill function provided in the application to fill each sample with 0x61BA03. This is a unique-enough number to be spotted at a glance in the debugger.
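The same fill can be done programmatically. A sketch (the function name is mine) that writes the marker into a buffer of 24-bit PCM; in a RIFF .wav the samples are stored least-significant byte first, so each one becomes the byte sequence 03 BA 61:

```c
#include <stdint.h>
#include <stddef.h>

/* Stamp every 24-bit little-endian PCM sample in the buffer with
   the 0x61BA03 marker, the same job done by hand in WinHex.
   The buffer must hold at least 3 * nsamples bytes. */
void fill_marker(uint8_t *buf, size_t nsamples)
{
    for (size_t i = 0; i < nsamples; i++) {
        buf[3 * i + 0] = 0x03;  /* bits 7:0   */
        buf[3 * i + 1] = 0xBA;  /* bits 15:8  */
        buf[3 * i + 2] = 0x61;  /* bits 23:16 */
    }
}
```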
Then, the next step was to use something to play it back, preferably in a loop, for debugging. I tried a few programs and found they all did something to the audio, i.e. dithering or filtering. I then tried VLC and found that it output the file "as-is"... well, nearly, anyway.
I was perplexed because I was still seeing dithering with VLC; then I remembered I had set up my STM32F072 to receive 24-bit audio, and when I checked my output setting it was set to 16-bit mode. DAMN!
After setting it to 24-bit 48kHz, I got the exact data in the .wav file being streamed into the microcontroller. YAY! This also proves to the snake-oil consumers (aka audiophiles) that VLC is suitable as a reference media player: it's properly designed, and whatever is in the .wav file is sent to the D/A converters verbatim. I proved it.
So, this opens up an interesting concept: it proves that .wav files can be used to transmit arbitrary data. I found out later that this is indeed how ONKYO do firmware upgrades on their AV receivers. They give repair shops a CD or .wav file to play back with a regular CD player, or VLC on a laptop, and boom, the new firmware is installed that way, including updates to DRM and HDCP keys.
So what to do now about the HDM01
Well, as mentioned, we're going to DMA our way into a solution and then simply sample the data at a slow rate. This is also why DSP chips are still the first choice for signal processing, and why A/V receivers have a DSP, typically an Analog Devices SHARC or Blackfin, in conjunction with a master microprocessor.