The World of Sonic2k

Using STM32F Flash as EEPROM- the easy way!

13/10/2017

STMicroelectronics, to their credit, provide everything you could ever need for development on their line of processors. However, when it comes to specific aspects of the hardware, they tend to abstract them to the nth degree.

In many applications it is acceptable to use unused areas of the system FLASH to store data. This is where EEPROM emulation comes in. Now, ST provide example code on how to do this, delivered via their STM32Cube suite. There is just one problem: whoever wrote that code was a very big fan of abstraction, to the extent that my head hurt and I couldn't get it to work. Compared to the STM32_StdPeriph_Library they provided, it reads like C++ code where the developer went completely overboard with OOP. So, following the KISS principle, here is how I did it.

First things first
Knowledge of what you are doing is critical here. If you get it wrong, you will see the CPU throw hard fault exceptions in the debugger. Additionally, any source code presented here was tested only on the STM32F072RBT8 processor, and will require modification for other processors in the same family.

So, we know our application will fit into the 128k of FLASH with space to spare (the linker's map file shows exactly how much is used). Next, we look in the reference manual to see where the FLASH resides:
[Figure: FLASH memory map from the STM32F072 reference manual]
So, the FLASH memory begins at 0x08000000 and continues upwards for 128k. Further on in the reference manual we find a really important table that tells us exactly where the last page of FLASH resides:

[Figure: flash page table from the reference manual, showing page 63 as the last 2K page]
So from the above, page 63 is the last page in FLASH. Now we need to consider a few basic things about FLASH:
  • Almost all FLASH memory implementations work in pages, and this one is no different. We want to confine ourselves to a single page so we can erase just that page without running the risk of wiping the rest of the FLASH (and losing our entire code image). We can also protect the page from erasure and modification.
  • The FLASH on this processor is organized as 32-bit words, so writing individual bytes means alignment needs careful consideration. This may or may not be ideal. In my implementation I read the entire word from the FLASH, modify one of the four bytes in the word, and write it all back.
Therefore, we wish to target this area in the FLASH for EEPROM emulation:
0x0801F800 through 0x0801FFFF

Making sure we don't use it
Now that we know where we will emulate EEPROM, a few modifications to the project's linker script are in order.
In the .ld file we find the text that describes where everything is allocated. Below are my modifications to a standard linker script generated by Atollic TrueStudio:

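The substance of the change is to shrink the FLASH region by one 2K page and declare the reserved block, so the linker can never place code there. Below is a sketch in the style of an Atollic-generated script; the region names and sizes assume the STM32F072RB (128K FLASH, 16K RAM):

```
/* Carve the last 2K page (0x0801F800 - 0x0801FFFF) out of FLASH
   and reserve it for EEPROM emulation */
MEMORY
{
  FLASH  (rx)  : ORIGIN = 0x08000000, LENGTH = 126K
  EEPROM (rx)  : ORIGIN = 0x0801F800, LENGTH = 2K
  RAM    (xrw) : ORIGIN = 0x20000000, LENGTH = 16K
}
```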
The actual code
Couldn't be simpler than this (eeprom.c):
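In outline, it boils down to a word-aligned read-modify-write. The sketch below is a reconstruction of the idea rather than the listing itself: the page is modelled as a RAM array so the logic can be exercised anywhere, the real flash unlock/program sequence is summarised in the comments, and all the names are mine.

```c
#include <stdint.h>
#include <string.h>

/* Emulated EEPROM region: page 63 of the STM32F072's FLASH,
 * 0x0801F800..0x0801FFFF (2K).  On target EE_BASE would be that
 * address; here the page is modelled as a RAM array so the
 * read-modify-write logic can be tested on a PC. */
#define EE_PAGE_SIZE 2048u
static uint8_t ee_page[EE_PAGE_SIZE];
#define EE_BASE ((uintptr_t)ee_page)

/* On target this would be the FLASH programming sequence:
 *   FLASH->KEYR = 0x45670123; FLASH->KEYR = 0xCDEF89AB;  (unlock)
 *   set the PG bit, write the data, poll BSY, clear PG.
 * Remember flash programming can only clear bits; going from 0 back
 * to 1 requires a page erase first.  This stub just writes RAM. */
static void flash_program_word(uintptr_t addr, uint32_t value)
{
    memcpy((void *)addr, &value, sizeof value);
}

uint8_t ee_read_byte(uint32_t offset)
{
    return *(volatile uint8_t *)(EE_BASE + offset);
}

/* Read the entire 32-bit word, modify one of its four bytes, and
 * write it all back - every operation stays word-aligned.
 * Byte placement assumes a little-endian CPU (the Cortex-M0 is). */
void ee_write_byte(uint32_t offset, uint8_t data)
{
    uintptr_t word_addr = (EE_BASE + offset) & ~(uintptr_t)3u;
    uint32_t word;
    memcpy(&word, (const void *)word_addr, sizeof word);

    uint32_t shift = (offset & 3u) * 8u;
    word &= ~(0xFFu << shift);
    word |= (uint32_t)data << shift;
    flash_program_word(word_addr, word);
}
```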

Digital Audio - Part Three

1/10/2017

The digital audio interface on the HDM01 was straightened out this weekend. Basically, the solution to ensuring it doesn't load the CPU was to use a circular buffer and a DMA channel. With this arrangement the byte order in the buffer is pretty much guaranteed to be correct, which is important. The DMA also ensures there is only an interrupt every 8 stereo audio samples, i.e. when the circular buffer is about to wrap around.
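For illustration, here is how one channel's sample can be reassembled from that buffer. I am assuming the layout my screenshots show - 24 significant bits left-aligned in a 32-bit frame, delivered by the DMA as two 16-bit halfwords, MSB half first - and the function name is mine; check it against your own I2S configuration:

```c
#include <stdint.h>

/* Reassemble one 24-bit PCM sample from the two 16-bit halfwords the
 * I2S/DMA combination deposits in the circular buffer (MSB halfword
 * first).  The 24 significant bits sit in the top of the 32-bit
 * frame, so an arithmetic shift right by 8 sign-extends the result
 * into a host int32_t. */
int32_t i2s_sample_24(uint16_t msb_half, uint16_t lsb_half)
{
    uint32_t raw = ((uint32_t)msb_half << 16) | lsb_half;
    return (int32_t)raw >> 8;   /* arithmetic shift keeps the sign */
}
```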

Here are some details on audio formats on the desktop PC:
[Figure: layout of PCM audio data in a .WAV file on the PC]
The above figure shows the relationship between the source data - in this case a normal .WAV file - and what is sent on the I2S bus.

With the STM32F0xx set up to use DMA on I2S interface 1, we find the following format of data appearing in the circular buffer:
[Figure: byte order of the I2S data as it lands in the circular buffer]
Next Steps
In order to translate this buffered audio data into a VU meter on a graphic display, we need to delve into the PCM itself. The following is what I figured out from playing around with Cool Edit / Adobe Audition:
[Figure: PCM sample values inspected in Cool Edit / Adobe Audition]
The values are, quite obviously, signed 2's complement. To translate this to VU display, we need to implement, in the digital domain, the method used for driving an analog VU meter. This used to be done typically with an amplifier and a full-wave rectifier consisting of germanium diodes.

First order of business is to convert the negative part of the signal into a positive one, so that we're essentially doing full-wave rectification without a smoothing capacitor:
Sample Processing Code

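A minimal sketch of that step: the samples are signed 2's complement, so rectification is just taking the absolute value into an unsigned integer (the function name and the 32-bit container are my assumptions):

```c
#include <stdint.h>

/* Digital full-wave rectification: fold the negative half of the
 * waveform positive, exactly what the germanium-diode bridge did for
 * an analog VU meter.  Input is a signed 2's complement PCM sample,
 * output an unsigned magnitude for the level display. */
uint32_t rectify_sample(int32_t sample)
{
    if (sample == INT32_MIN)            /* negating INT32_MIN overflows */
        return (uint32_t)INT32_MAX + 1u;
    return (uint32_t)(sample < 0 ? -sample : sample);
}
```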
The above code then means we have positive (unsigned) integers for the signal we receive, and since we only sample the level every 8 audio samples, it is technically damped. However, this is not the entire story - more about this in the next instalment.

The Truth about IoT, especially SigFox

28/9/2017

As one can imagine, the media - especially our good friends at MyBroadband - are all on the bandwagon about IoT (Internet of Things). Well, I take a pretty dim view of this reinvention of the wheel, and now is the time to put the career liars aka journalists in their place about this stuff.

What is all the hype about?
LOLWUT, we've had little things on the internet as far back as 2001, when cellular modem modules became easily available and GPRS was in place. Just don't educate the journalists on that one, OK - they don't like their bubble to be burst. But let's move on, shall we...

First we saw Zigbee in 2005, which was supposed to be something along these lines. It was also hyped to the extreme, but turned out to be not so cool after all, with its proprietary, complicated stack and its demands on RF design expertise (it operates in the 2.4GHz band, smack bang in the middle of the Wi-Fi band).

SigFox? More like "send us your monies and we might allow you to play"... Merci beaucoup!
Fast forward to 2011, and we have these French twits, SigFox, who developed their own proprietary protocol using the ISM bands (which, as it is, are already crowded in South Africa with every man and his dog and their remote controls for security gates and alarms) and then relay these small packets over a cellular network. Clever idea, that - piggybacking on the machine of the greedy, money-hungry cellular networks.

However, as it turns out, it's not that simple. I recently took on a client who had a good product idea using this network, and guess what - we were met with a roadblock. SigFox wanted money, pretty much the same setup the USB Implementers Forum runs, where you pay for a Vendor ID - that magical hexadecimal number that buys your spot in the sun on the USB bus. So yeah, what MyBroadband won't tell you is the following:
  • Vodacom is talking through their arse when they say they've connected 1.3 million devices on their network. Oh wait, that was a paid-for strut by MyBroadband - don't bother reading that article.
  • It's not as simple as they paint it to be. In reality you're going to be using some third party's device to connect to the network, with the network ID hard-coded. Genius - so now you can be tracked by the NSA too! If you want to do your own thing, you're going to have to wine and dine the Frenchies at SigFox HQ in Paris with some moolah to get the unique network identifier you need to work on this network.
  • This is being punted as a nice way of doing some simple stuff. The reality is, the bandwidth is pretty shit! If you want good, real-time performance, get yourself a SIM card, a data bundle and a nice modem from Sierra Wireless or u-blox, and put this network to shame.
  • The protocol is proprietary - pretty much a no-thank-you from me. Open Source rules the roost!

It is as easy as chips to do this with off-the-shelf parts: a few good ISM-band radios from TI or Silicon Labs, and a nice base station with a good modem using a private APN, as the alarm-system companies have done for a long time.

I am in the wrong business, I tell you... I should also make money off the stupidity of the general public.

Digital Audio Part Two

25/9/2017

So I have been hard at work trying to get my headphone amplifier finished. One of the design ideas was to send the digital audio through the CPU in pass-through fashion. However, because the STM32F072RBT8 cannot generate the exact sampling rates (there is a percentage of error), I decided to make it "listen" to the I2S bus instead. This is done so that I can do stuff like a level display or a spectrogram. It also puts to rest any possible accusation from audiophiles that I am "fiddling" with the audio, and means there can be no added jitter whatsoever.

It was not difficult to get the digital audio up and running; however, few appreciate how much work it is for a CPU to process audio. I am presently reading in all 24 bits at a 48kHz sampling rate, and right off the bat the CPU is taking strain:
  • Display is "tearing" - I am particularly upset about that.
  • Response to inputs, including the IR remote, is painfully slow.
The other problems include, but are not limited to:
  • Getting the word order correct when the words are > 16 bits - we have to do frame alignment.

Therefore, there is only one thing to be done: DMA (direct memory access) to the rescue. That means chucking everything we receive into a circular buffer and processing it at leisure, i.e. computing the FFT for the spectrogram and the dBm for the level (VU) display. The DMA has all the necessary logic to hold off the ARM CPU and all that shit that makes my brain hurt trying to think about it. More about my success (or failure) with that in an upcoming blog post.

With DMA it would then be possible to pass through all the audio, but alas, the STM32F072 cannot generate accurate master clocks (the percentage error is rather high at 192kHz). While I am sure this would not be seen or heard except with insanely expensive Audio Precision test gear, I do not want to even entertain the idea - I don't want to provide fuel for audiophile debates, which in turn would condemn this project before it even gets a chance in the market.

Interesting times during debugging
Without fancy (aka fucking expensive) test equipment, I needed to get creative to test the interfaces. And I did, using rudimentary tools. The three applications I used (in conjunction with my PC sound card's optical output) were as follows:
  • Adobe Audition (trial version is fine)
  • WinHEX
  • VLC Player
First order of business was to create a dead-normal .WAV file at 24 bits, 48kHz. I created one around 10s in length, filled the entire file with silence using the tools provided in Adobe Audition, then saved it somewhere.

Then I opened it for editing in WinHEX and was greeted by the familiar WAV file formatting. Using the reference given here, I found where the audio samples began and ended - hence, the places where I could do stuff.
[Figure: the .wav file opened in WinHEX]
Using WinHEX I marked the start and end positions of the audio data, then used its fill feature to fill each sample with 0x61BA03. This is a unique-enough number to be spotted at a glance in the debugger.
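Incidentally, the same marker file can also be generated programmatically instead of with a hex editor. The sketch below writes a minimal 24-bit, 48kHz stereo .WAV whose every sample is 0x61BA03 (stored little-endian, so the bytes on disk read 03 BA 61); the function name is mine:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void put_u32(uint8_t *p, uint32_t v)
{
    p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
}
static void put_u16(uint8_t *p, uint16_t v)
{
    p[0] = v; p[1] = v >> 8;
}

/* Write a minimal 24-bit, 48kHz stereo RIFF/WAVE file in which every
 * sample is the marker value 0x61BA03. */
int write_marker_wav(const char *path, uint32_t frames)
{
    const uint16_t channels = 2, bits = 24;
    const uint32_t rate = 48000;
    const uint16_t block_align = channels * bits / 8;       /* 6 bytes */
    const uint32_t data_bytes = frames * block_align;

    uint8_t h[44];
    memcpy(h, "RIFF", 4);       put_u32(h + 4, 36 + data_bytes);
    memcpy(h + 8, "WAVEfmt ", 8);
    put_u32(h + 16, 16);        /* fmt chunk size     */
    put_u16(h + 20, 1);         /* PCM                */
    put_u16(h + 22, channels);
    put_u32(h + 24, rate);
    put_u32(h + 28, rate * block_align);   /* byte rate */
    put_u16(h + 32, block_align);
    put_u16(h + 34, bits);
    memcpy(h + 36, "data", 4);  put_u32(h + 40, data_bytes);

    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    fwrite(h, 1, sizeof h, f);
    const uint8_t marker[3] = { 0x03, 0xBA, 0x61 };  /* 0x61BA03 LE */
    for (uint32_t i = 0; i < frames * channels; i++)
        fwrite(marker, 1, 3, f);
    fclose(f);
    return 0;
}
```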

Then the next step was to use something to play that back, preferably in a loop, for debugging. I tried a few programs and found they all did something to the audio, i.e. dithering or filtering. I then tried VLC and found that it output the file "as-is" - well, nearly anyway.

I was perplexed because I was still seeing dithering with VLC. Then I remembered I had set up my STM32F072 to receive 24-bit audio, and when I checked my output setting, it was set to 16-bit mode - DAMN!
[Figure: the sound card's output settings, still in 16-bit mode]
After setting it to 24-bit 48kHz, I got the exact data in the .wav file being streamed into the microcontroller. YAY! This also proves to the snake-oil consumers (aka audiophiles) that VLC is suitable as a reference media player: it's properly designed, and whatever is in the .wav file is sent to the D/A converters verbatim. I proved it.

So this opens up an interesting concept: it proves that .wav files can be used to transmit arbitrary data. I found out later that this is indeed how ONKYO do firmware upgrades on their AV receivers. They give the repair shops a CD or .wav file to play back with a regular CD player, or VLC on a laptop, and boom - the new firmware is installed that way, including updates to DRM and HDCP keys.

So what to do now about the HDM01
Well, as mentioned, we're going to DMA our way into a solution, and then simply sample the data at a slow rate. This is why DSP chips are still the first choice for signal processing, and why A/V receivers have a DSP - typically an Analog Devices SHARC or Blackfin - in conjunction with a master microprocessor.

Digital Audio

25/9/2017

The design work on the Alpha-X HDM01 is progressing well. This past weekend I did the necessary work to get the digital audio path up and running. Even though many books and other educational resources tend to steer clear of this subject, it's really not that difficult or mysterious.

The screenshot below shows what it looks like when audio is played back at a sample rate of 48kHz, 24 bits. Indeed, even my humble 25MHz oscilloscope is fast enough to capture this in real time. The upper trace is LRCK, and the lower trace is the PCM data, all 24 bits of it.
[Figure: oscilloscope capture - LRCK (upper trace) and PCM data (lower trace)]
The above was captured during playback of this album (track 1):
[Figure: the album in question]
A note for the audiophiles reading this blog post:
The LRCK is measured by the 'scope as 47.98kHz. This is NOT an accurate measurement - the Rigol oscilloscope has never excelled at frequency measurement, which is why we use a dedicated frequency counter. So there is no argument here for "jitter". At any rate, this system uses oversampling; we cannot reasonably expect to install non-oversampling D/A converters, because a) there is no space, b) that would raise the cost of the system into the stratosphere, and c) such devices are not really offered anymore due to cost and low demand (surprise - semiconductor companies are businesses too, complete with suits and corporate greed!).

A note about jitter in digital audio:
In digital audio, jitter refers to the deviation of the clock source from its ideal, or nominal, rate. In the early days of digital audio this was a fuck-up! Jitter meant the D/A converter began to do "stuff" it wasn't supposed to, leading to quantization noise and other "artifacts" of the kind one might have experienced when cueing or mucking about with heavily compressed MPEG sources.

In systems such as these, especially in 2017, the best D/A converters in the world are of the sigma-delta modulated type, which is, by virtue of its inherent design, largely unaffected by jitter. The oversampling rate is insanely high, rendering any harmonics and intermodulation products virtually non-existent in the audio band. Indeed, the low-pass filter required after the D/A converter chip is a simple pi filter - no need to go the whole hog with Butterworth or Chebyshev designs, which would raise the cost through expensive WIMA capacitors and E96-series resistors.

Also, we now have the benefit of some 30+ years of DIR (digital interface receiver) design behind us, meaning we have clever electronics that tolerate an insane amount of distortion in the digital audio path while automagically recovering a beautifully stable, jitter-free clock.

It is my belief that many audiophiles are wasting a lot of money pursuing snake oil in the form of fancy gold-plated S/PDIF cables and other contrived items. In this exercise I have managed to obtain the sound I wanted - that often-elusive studio quality - on a budget, and on a fucking breadboard! The difference is that I know what I am doing.

IR Remote Controls and their fangled Protocols - Part One

31/8/2017

A while ago I worked on decoding some of those infra-red DSTV decoder remotes (the ones you find for 100 bucks at almost every supermarket) to see what is cooking there and whether the remote could be used for other applications. Well, yes - they speak standard protocols.

Unfortunately the code I wrote for that is now lost :( [hard disk failure], but I have recovered enough to be able to document everything again.

I am designing a system for my Alpha-X kit so I am looking closely at protocols. The most widely used one is the NEC protocol, and until recently I didn't even know it had a predecessor.

So, looking at a remote control, I know I will have to roll my own (I don't think it would be good form to use MonoChoice's remote with my product - wouldn't go down too well, would it? I'd probably end up on the business end of their lawyers). That means designing my own handset from scratch.

As a starting point I began to look at the venerable old µPD1986 (incidentally used in the remotes for those ancient analogue M-NET decoders). I happen to have one of the chips, so I hooked it up on a breadboard like this:

[Figure: µPD1986 breadboard hookup]
Now, the above rudimentary "remote" actually works, except for one problem: it transmits only a small frame, and according to the datasheet it's basically a function of the key matrix:
[Figure: µPD1986 frame format from the datasheet]
So all we have is a 5-bit code, with no error correction or any way of detecting an error. That kind of explains why those decoder boxes were so susceptible to interference from other remote controls. The basic fact here is that this is not going to fly, so we need to implement the current NEC protocol.

A fun fact: this simple frame is 9ms in duration, which means it fits exactly into the start pulse of the current NEC protocol. That is not a coincidence!

Before we move on to the current protocol, let's look at how the µPD1986 actually achieves the battery life it does (typically years of operation from a set of AAA cells):
  • The oscillator is not running, it only runs when a key is pressed.
  • The key matrix lines KI0..KI3 are inputs, any high logic level here triggers the oscillator to start up.
  • When the oscillator starts, scanning is achieved via pulses output on K0..K6. The pulses are positioned so that when a KIx input pin meets a Kx pin, the chip can decode the key position. Applying Vcc to the KI pins starts the oscillator, but no transmission is generated (obviously).
Conclusion: the part is static until a key is pressed. Typical CMOS ICs consume mere microamperes when static, i.e. not being clocked. This is true for most, if not all, CMOS ICs, especially those in the CD4000 family (with exceptions).

The current NEC protocol
The NEC protocol currently in use looks like this:
[Figure: NEC protocol frame timing]
So the above is fairly self-explanatory, and it is apparent that four bytes are being transmitted. This is the 'expanded' version of the protocol, in use virtually everywhere, even on the Apple TV remote.
  • The first two bytes are the Address and its inverse
  • The last two bytes are the Command and its inverse
The purpose of the inversion (1's complement) is to enable rudimentary error detection.
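On the receiving side, that makes frame validation a one-liner; a sketch (the function name is mine):

```c
#include <stdint.h>

/* Validate a received NEC frame: byte 0 = address, byte 1 = ~address,
 * byte 2 = command, byte 3 = ~command.  Each pair must XOR to 0xFF.
 * (With 16-bit 'extended' addresses, skip the address-pair check and
 * treat bytes 0..1 as the customer code instead.) */
int nec_frame_valid(uint8_t addr, uint8_t addr_inv,
                    uint8_t cmd,  uint8_t cmd_inv)
{
    return (uint8_t)(addr ^ addr_inv) == 0xFFu &&
           (uint8_t)(cmd  ^ cmd_inv)  == 0xFFu;
}
```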

Now, with the proliferation of products on the market, the address field has been expanded to use both bytes, i.e. the full 16 bits, forming a customer code. Indeed, some manufacturers publish their codes publicly; most don't, though.

The Command bytes, however, are vendor-dependent. Some manufacturers use the scheme above, i.e. Command followed by its inverse; others use their own schemes.

The biggest problem, of course, is that NOBODY can tell me, nor can I find online, a comprehensive list of customer codes assigned by NEC/Renesas - nor any information on how to formally apply for such a code.

Because of this, and to ensure we don't get interference, I am going to "borrow" MonoChoice's code and implement my own thing. That way my remote codes will be unique enough not to clash with anyone's equipment, now or in the future. Details to follow in the next part!

A neat SPI engine in Verilog

20/8/2017

I need to use programmable logic in the Alpha-X project, and it was the perfect opportunity to learn some kind of HDL. I tried VHDL, but I much prefer Verilog, as it is the closest thing there is to ANSI C, and I will henceforth use it instead.

The following design is beautifully simple and works every time: a simple SPI engine that uses two SPI byte transfers, [command] [data].

This means each transaction is two bytes: the first is a command code, the second is the data (if required by the command).

For each command, an ACK or NAK is returned. We also have an asynchronous NRST input to reset the device when the micro boots, so that we initialize to a known state. The source for this module can be obtained here.
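From the microcontroller's side, a transaction then looks like the sketch below. spi_xfer() stands in for whatever byte-exchange routine the SPI driver provides - here it is stubbed to always ACK so the flow can run on a PC - and the ACK/NAK values are illustrative, not the ones baked into spi.v:

```c
#include <stdint.h>

#define SPI_ACK 0xA5   /* illustrative response codes -  */
#define SPI_NAK 0x5A   /* the real values live in spi.v  */

/* Stand-in for the MCU's SPI byte exchange.  This stub just ACKs
 * every command so the transaction logic can run on a PC; on target
 * it would clock one byte out and return the byte clocked in. */
static uint8_t spi_xfer(uint8_t out)
{
    (void)out;
    return SPI_ACK;
}

/* One transaction: send [command][data]; return 0 on ACK, -1 on NAK. */
int spi_command(uint8_t cmd, uint8_t data)
{
    spi_xfer(cmd);                  /* first byte: the command code      */
    uint8_t resp = spi_xfer(data);  /* second byte: data; engine replies */
    return resp == SPI_ACK ? 0 : -1;
}
```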

spi.v


Rotary Encoder Decoding, with VHDL of course :)

1/8/2017

Here is the simple, easy-as-pie VHDL implementation of a decoder for those rotary encoders that are replacing volume pots in everything from microwave ovens to home theatre. I basically took an example intended for one of Xilinx's Spartan FPGA devices and modified it to my needs. Enjoy!
----------------------------------------------------------------------------------------------------
-- rotary_decoder.vhd
-- A proven, decoder solution in VHDL for rotary encoders (Bourns type)
-- Based on the Xilinx Spartan -3E example project and modified for taste
-- Removed LED drivers and made this a module that is intended to assert an IRQ on a microcontroller
-- Minimum clock for sampling = 500Hz
--
-- Using the Peter Alfke technique
-- Design concept by the late Peter Alfke- Xilinx (1931 - 2011)
-- Modified to taste to be portable on all Xilinx products (including the XC9536 as a minimum)
--
--                  (c) 2017 That Blue Hedgehog
--                              ______
--                       _.-*'"      "`*-._
--                 _.-*'                  `*-._
--              .-'                            `-.
--   /`-.    .-'                  _.              `-.
-- :    `..'                  .-'_ .                `.
-- |    .'                 .-'_.' \ .                 \
-- |   /                 .' .*     ;               .-'"
-- :   L                    `.     | ;          .-'
--   \.' `*.          .-*"*-.  `.   ; |        .'
--   /      \        '       `.  `-'  ;      .'
-- : .'"`.  .       .-*'`*-.  \     .      (_
-- |              .'        \  .             `*-.
-- |.     .      /           ;                   `-.
-- :    db      '       d$b  |                      `-.
-- .   :PT;.   '       :P"T; :                         `.
-- :   :bd;   '        :b_d; :                           \
-- |   :$$; `'         :$$$; |                            \
-- |    TP  ;             T$P  '                             ;
-- :                        /.-*'"`.                       |
-- .sdP^T$bs.               /'       \
-- $$$._.$$$$b.--._      _.'   .--.   ;
-- `*$$$$$$P*'     `*--*'     '  / \  :
--    \                        .'   ; ;
--     `.                  _.-'    ' /
--       `*-.                      .'
--           `*-._            _.-*'
--                `*=--..--=*'
--
--
----------------------------------------------------------------------------------------------------
--
-- Library declarations
--
-- Standard IEEE libraries
--
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;
--
------------------------------------------------------------------------------------
--
--

entity rotary_decoder is
    Port ( detent   : out std_logic;
           dir      : out std_logic;
           rotary_a : in  std_logic;
           rotary_b : in  std_logic;
           clk      : in  std_logic);
end rotary_decoder;
--
------------------------------------------------------------------------------------
-- Architecture of module
------------------------------------------------------------------------------------

architecture Behavioral of rotary_decoder is
--
--
-- Module I/O
--

signal      rotary_a_in : std_logic;
signal      rotary_b_in : std_logic;
signal        rotary_in : std_logic_vector(1 downto 0);
signal        rotary_q1 : std_logic;
signal        rotary_q2 : std_logic;
signal  delay_rotary_q1 : std_logic;
signal     rotary_event : std_logic;
signal      rotary_left : std_logic;
--
-- Internal buffering to guarantee glitch-free output
-- Important if this will fire an IRQ handler on a very fast processor
--

signal internal_detent : std_logic;
signal internal_direction : std_logic;
--
--
--
-----------------------------------------------------------------------------------
-- Circuitry in module
-----------------------------------------------------------------------------------

begin

  -------------------------------------------
  -- Filter / Debounce
  -------------------------------------------


  rotary_filter: process(clk)
  begin
    -- Sample the phase A and phase B inputs on the rising clock edge
    if clk'event and clk='1' then
      rotary_a_in <= rotary_a;
      rotary_b_in <= rotary_b;

     
      -- Concatenate the rotary input signals to form a vector for the case construct
      rotary_in <= rotary_b_in & rotary_a_in;

      case rotary_in is

        when "00" => rotary_q1 <= '0';        
                     rotary_q2 <= rotary_q2;

        when "01" => rotary_q1 <= rotary_q1;
                     rotary_q2 <= '0';

        when "10" => rotary_q1 <= rotary_q1;
                     rotary_q2 <= '1';

        when "11" => rotary_q1 <= '1';
                     rotary_q2 <= rotary_q2;

        when others => rotary_q1 <= rotary_q1;
                       rotary_q2 <= rotary_q2;
      end case;

    end if;
  end process rotary_filter;
 
  --
  -- The rising edges of 'rotary_q1' indicate that a rotation has occurred and the
  -- state of 'rotary_q2' at that time will indicate the direction.


  direction: process(clk)
  begin
    if clk'event and clk='1' then

      delay_rotary_q1 <= rotary_q1;
      if rotary_q1='1' and delay_rotary_q1='0' then
        rotary_left  <= rotary_q2;
        rotary_event <= '1';
      else
        rotary_left  <= rotary_left;
        rotary_event <= '0';
      end if;

    end if;
  end process direction;
 
  ---------------------------------------------------------------------------------------------------
  -- Output Signals to microcontroller or other Top-level modules
  ---------------------------------------------------------------------------------------------------

 
  output_signaling: process(clk)
  begin
    if clk'event and clk='1' then
      if rotary_event='1' then
        if rotary_left='1' then
          internal_direction <= '1';
        else
          internal_direction <= '0';
        end if;
        internal_detent <= '1';
        -- Double buffering is used here to ensure the output is glitch-free
        -- (we're going into the IRQ pin of an ARM Cortex)
      else
        internal_detent <= '0';
      end if;
      detent <= internal_detent;  -- detent is updated AFTER direction is stable
    end if;
  end process output_signaling;

  dir <= internal_direction;      -- direction is updated BEFORE detent
  --
  --

end Behavioral;

-----------------------------------------------------------------------------------------------------
--
-- END OF FILE rotary_decoder.vhd
--
-----------------------------------------------------------------------------------------------------

Jungo Windriver Sucks!

23/7/2017

So this weekend I wanted to do some work in VHDL and discovered my programmer doesn't want to work.
Some digging turned up the error message. Apparently, when I installed AVR Studio it overwrote the WinDriver package used by the Xilinx tools. On this site I found a guide to fix the issue.

Fixing the issue involves these steps from the site above:

If iMPACT fails to connect to the cable in Boundary Scan mode when you initialise the chain, and gives the message "WARNING:iMPACT:923 - Can not find cable, check cable setup!", the problem is probably the Jungo drivers and windrvr6.sys. If we search the Console we will find the message: "Driver windrvr6.sys version = 11.5.0.0. WinDriver v11.5.0 Jungo Connectivity (c) 1997 - 2014 Build Date: Jan 26 2014 x86_64 64bit SYS 13:30:18, version = 1150. Invalid device driver license." To solve this problem you need to do the following:
  • Uninstall Jungo WinDriver from Device Manager.
  • Delete windrvr6.sys from c:\windows\system32\drivers.
  • Execute install_drivers.exe (might need administrator rights) from c:\Xilinx\14.7\ISE_DS\ISE\bin\nt64. (To do this from the Command Prompt, follow the instructions below.)
right click on Start
click on Command Prompt (Admin)
type: cd\
press [Enter]
type: cd Xilinx\14.7\ISE_DS\ISE\bin\nt64
press [Enter]
type: install_drivers.exe
press [Enter]


[Figure: running install_drivers.exe from an administrator Command Prompt]

Upon completing this step, enjoy your system - the drivers are fixed and things work again:
[Figure: iMPACT finding the cable again]

Sonikku no fonto desu ka? (Is that Sonic's font?)

8/7/2017

In a universe far away, in the year 2004, a blue hedgehog was a pretty good assembly-language programmer. At home, tinkering with an LCD module one weekend, he got it working in graphics mode:
[Figure: the LCD module running in graphics mode]
For this same LCD - which I had plans for in a digital intercom - I took the font I saw in Windows XP (back then I had been using it for just over 18 months). Using my shiny new (at the time) AMD Athlon box, I created the font the way I did things back then: I drew it in MS Paint, wrote the pixel patterns down, then typed it all up in assembler for a Freescale HC08 chip.

Here are the unedited original files created around that time, from the repository:
[Figure: the original font source files from 2004]
So what was this actually about?
Back in the day I was naive about the typical companies I worked for, especially in the Detroit knockoff known as Gauteng. This was an idea I had at home - a digital intercom. I offered it to them later that year as a product concept; sadly I was rebuffed - no big deal. However, I kept the font and used it as a demo for an LCD they were given as a sample on a trip to Hong Kong towards the end of 2004. So naturally, to make things work and prove concepts, I used what I had, and that happened to be this font.

It's a standard font in the public domain, so my thinking has always been that my "embedded" version is fine for GNU uses. However, since I have at various times been accused by this particular company of "stealing their IP" (read: they were butthurt that I went to work for one of their most ardent competitors), I have been reluctant to use this font again. But since the lead accuser has emigrated to Australia, and they have consigned every single bit of code I ever wrote to FILE #13, I feel it's time to reuse the effort I once expended to create this.

I do this because:
  • I don't give a fuck anymore - they can try to make accusations again, but I do think DH has long since been happy with achieving his devious end. He became director, a position he craved from the get-go at Daddy's company - the primary goal (the secondary goal was always to chase skirt/dip his pen in company ink/date chicks at the office).
  • Since the font was created at home (in my parents' home), and the relevant clause in my employment contract is long since unenforceable, they can go suck a cock for all I care.
  • From the word go, my name was on all those files. The other party involved was actually my dear old brother, and he has long since licensed his end under the GNU General Public License.

So, it's been a while, but a good font never gets old. My original Trebuchet, ported to Alpha-X's display driver code on an STM32F052 ARM Cortex-M0, lives on:
[Figure: the font running on the Alpha-X display]

Around the time I originally did this work, I was enjoying this:
[Figure: album cover]