Wednesday 28 February 2018

“Failed To Program Device” using PIC micros — 10 tips

If you work with PIC microcontrollers, you have almost certainly encountered this problem: a seemingly innocent setup that worked fine yesterday suddenly refuses to program your PIC.

The cause of this issue varies a lot, but personally I blame the drivers controlling the hardware programmer. Some of these “fixes” really shouldn’t be necessary.

For reference, I am using a REAL-ICE programmer targeting a PIC18F, but I don’t think it matters much.

Here are 10 things to try which have worked for me:
  1. “Reset” the hardware programmer in your project properties - seems to work a lot!
  2. Unplug/replug hardware programmer/try again (lots of times, don’t be shy!)
  3. Check programming cable for flaws (mine is hand made and has failed before)
  4. Check DC supply power with a multimeter (is your power supply working right?)
  5. Check the voltage at the microcontroller - is your circuit loaded any differently, or powered from something that's busier today?
  6. Use MPLAB IPE to perform a full erase and a blank check; if the blank check fails, repeat step 2 until it passes
  7. Use a dedicated USB port or a powered hub (you could be hitting a 500 mA port current limit)
  8. Try changing all your other leads too, just in case
  9. Look for flaws around the programming lines leading to the micro (beep test using multimeter is very useful)
  10. Restart your computer (unlikely to help but worth a try… Sometimes an errant process holds on to the driver)
Finally, stay cool-headed and remember it probably will work soon. Well, eventually. Maybe. For a little while.

Tuesday 19 July 2016

Debugging the analog data bus of a Roland JX3P

I've been having problems with my JX3P (modified with a Kiwi 3P kit) recently. It had been working very well for months and months, but one day everything went extremely quiet. Kiwitechnics have been super helpful - they suggested it didn't sound like an MCU problem and gave me a couple of debugging tips. They are awesome.

Last weekend I finally had a few hours to investigate, so I opened her up!




Everything sounded extremely quiet, with the occasional very loud noise. The moment it went loud was not completely random... It seemed to coincide with more than 3 key presses, although not consistently (sometimes I could press 6 keys and nothing would happen... at other times, it caused the loud noise).

First off, I checked the volume levels of the patches. Strangely, they had all become extremely low. I'm not sure why, but my guess would be a knock. In the course of this analysis, I realised there was a weak connection inside one of my MIDI cables - this could be related :)

So I increased the level of the active patch, and sure enough I could hear the notes again. However, a problem remains (otherwise I wouldn't have bothered with this post). The same reproduction steps I described above now make everything *even louder* despite the VCA levels for the active patch being set to maximum!!

A few knowns at this stage:

  1. The problem is related to multiple (more than 3) keys being pressed at once (although the pattern isn't obvious)
  2. It seems to be unrelated to specific voices (as each voice chip works fine in isolation)
  3. The resultant level goes beyond what it should be capable of (max VCA level on any given patch)

So I started looking at the multiplexers used in the JX3P... the "4051" chip:


 

The 4051 is a clever little chip. It can be used to multiplex (bring together) or de-multiplex (split apart) 8 individual signals. Which of the eight channels is connected to the common signal pin (pin 3) is determined by the bit pattern on the select pins 9, 10 and 11, while pin 6 is the inhibit pin - pull it high and all eight channels are switched off. Because the channels are bidirectional analog switches, the same chip works as a multiplexer or a demultiplexer depending on which side you drive.

This chip is used to set the different analog outputs that control the various blocks in the analog signal path of the JX3P, including the VCAs for each voice chip (which control the amplitude of the notes...)
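Just to make the select logic concrete, here's a tiny C++ sketch of the mental model (the channel numbering follows the standard 4051 select-pin ordering; the level values are invented for illustration) :-

#include <array>
#include <iostream>

// Models how the 4051's three select lines (A, B, C on pins 11, 10 and 9)
// pick which of the 8 channels is routed to the common pin 3.
int selected_channel(bool a, bool b, bool c, bool inhibit)
{
    if (inhibit)                     // pin 6 high: every switch is off
        return -1;
    return (c << 2) | (b << 1) | a;  // 3-bit channel index, 0..7 (A is the LSB)
}

int main()
{
    // Invented example: pretend each channel carries a VCA control level.
    std::array<double, 8> levels = { 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.0, 0.0 };

    int channel = selected_channel(true, false, true, false);   // A=1, B=0, C=1 -> channel 5
    if (channel >= 0)
        std::cout << "Channel " << channel << " -> level " << levels[channel] << "\n";
}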


Wednesday 13 January 2016

Achieving change

We've been discussing several radical policy changes recently at Bluefruit. One of these is the possibility of getting rid of Annual Leave altogether and encouraging people to take more responsibility for finding the right time to take a holiday. This idea in itself is worthy of a blog post (or a book...)

On a highly related note, a regular theme in discussions with people who come to us with product ideas is the tension between a system people are already familiar with (thanks to its previous incarnations) and a system which is more innovative (and therefore less familiar).

I spent some time articulating this for a few of our customers this week, and realised that it's something worth sharing.

There are several stages to achieving change. To move from one stage to the next requires investment to overcome a unique "energy barrier", the nature of which depends very much on what you are trying to change and how quickly and radically you are trying to change it.

1. Existing doctrine
The concept to overcome here is that "it is the way it is because that's the way we do it...". That doesn't mean ignoring the positive aspects of the way it is in our attempts to innovate, simply that the fact of it being a certain way is not a good rationale for it remaining that way.
 
2. Theoretical acceptance of change
Once it's been agreed that change is worthwhile, a new proposal has to be made which is theoretically accepted (not only by customers, but also management and engineers, in the case of Bluefruit Software).
 
3. Practical acceptance of change
Ah, the move from the theoretical to the practical! Not to be underestimated... I wouldn't say theoretical change is necessarily easier to accept than practical change. In some ways, and again depending on the exact context, it's much easier to accept something practically, because you get to experience the change and decide whether it's positive or not.

4. New doctrine
Once everyone has accepted that the change works on a theoretical and practical level, it is still necessary for that change to become the new "doctrine". A really great idea can be borne out practically and then completely ignored by everyone who should be adopting it.

(and repeat... if you're lucky!)

Sometimes the energy barriers involved in achieving new doctrine are simply not worth overcoming due to the energy investment required, but more often it's a matter of approach.


Some of the energy barriers involved :

- Familiarity
- Time
- Cost
- Information "battle"
- Regulation

If change can be experimented with in bite-size chunks, it helps a lot, because you're moving less change through the process. If we imagine for a moment that "change" is something physical and the energy barriers involved are related to mass and physics, it is very easy to visualise what's happening when we "push through" innovative changes.

This is one of the huge advantages of Agile: it reduces the cost of change by placing more emphasis on practical experimentation, and it relies on stakeholders being integrated with the process.

Always interested in discussing this if anyone is keen!

Thanks

Saturday 31 October 2015

TDD with VHDL

We've recently been doing some interesting prospective work on FPGAs with VHDL and I thought it would be cool to try and update www.cyber-dojo.org to support the VHDL language.

Cyber Dojo has been our in-house training tool of choice for a long time now. It essentially enables you to practise programming the TDD way in a web-based environment. Multiple developers can join the same dojo session and we all review together at the end.

The first challenge was setting up a development environment for Cyber Dojo, which essentially meant getting a "Ruby on Rails" Linux server up and running. I chose to do this from scratch rather than paying for an image, so I went for the Manjaro distribution for a few reasons :

1. It's Arch based, and Arch is relatively lightweight
2. If we were using raw Arch it would be a pain in the ass
3. I've used Manjaro before and it was relatively painless

After some fiddling around, I decided to use Passenger from the command line to run a development HTTP server from our forked Cyber Dojo code. Passenger can also run through Apache, but that looked like a lot more of a faff.

All credit to my team member Pablo Mansanet who then found GHDL here :
http://home.gna.org/ghdl/

... and created the relevant Docker files + other changes to Cyber Dojo to support the VHDL language.

We tried it internally (twice).... ironed out a couple of problems (largely related to renaming files)... pull-requested to Jon Jagger.... and wham! VHDL is now supported by Cyber Dojo :)


library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity hiker_testbench is
end hiker_testbench;

architecture test_fixture of hiker_testbench is
   signal meaning_of_life_test : std_logic_vector (7 downto 0);
begin
   UUT: entity work.hiker port map (meaning_of_life_test);

   process
   begin
       wait for 1 ns; -- Signal propagation
       assert (meaning_of_life_test = "00101010") -- 42
               report "Meaning of life value incorrect"
               severity failure;
      
       assert false report "End of test" severity note;
       wait;
   end process;
end test_fixture;


If you fancy a more appropriate challenge than the existing exercises, we implemented a "half adder" as a test exercise which didn't stretch any of our less VHDL-savvy developers too badly (and also provided a base for a full adder so the guys who have more experience could leap ahead).

https://en.wikipedia.org/wiki/Adder_(electronics)

Enjoy! :)

Saturday 21 February 2015

C++ BDD with Igloo/Bandit/Homebrew


It's been a while since I posted, so I thought it would be good to give an update.

We had some problems working with SpecFlow using C++, mainly around the "shim" layer we created to provide interoperability.

The major issues were :-

1. Debugging ("the breakpoint game" trying to debug across layers)
2. C++ developers are not C# developers, and not all of them want to be
3. Management perception of reliance on C# as a risk
4. Fragile shim layer that doesn't allow the transport of all data types/classes

There was also a plethora of "niggles" which we mostly worked out, but overall no one was particularly happy with the solution.

Instead we decided to try "Igloo", a C++ BDD framework one of our developers found (we later upgraded this to "Bandit"). It essentially provides macros to facilitate human-readable specification tests :-
http://banditcpp.org/
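
If you haven't seen Bandit before, a spec written directly against it (without our feature-file tooling) looks roughly like the sketch below. The apple example is invented for illustration, so check the Bandit docs for the exact details of your version :-

#include <bandit/bandit.h>

using namespace bandit;
using namespace snowhouse;   // AssertThat / Equals

go_bandit([]() {
    describe("an apple waiting to be eaten", []() {
        int bites = 0;

        before_each([&]() {
            bites = 0;               // per-spec setup
        });

        it("registers the first bite", [&]() {
            bites += 1;
            AssertThat(bites, Equals(1));
        });
    });
});

int main(int argc, char* argv[])
{
    return bandit::run(argc, argv);  // run all registered specs
}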

We augmented this and developed Python scripts that allow us to translate a feature file like this :-

 

   Feature: EatingApple

   Scenario: First bite


    Given an apple waiting to be eaten
    When teeth are sunk into it
    Then juice flies around it



Into this :-
Feature("EatingApple") Handler(EatingAppleSteps)
 
   GTestID(first_bite, EatingApple)
   Scenario("First bite")
      Given("an apple waiting to be eaten") StepPlay(GivenAnAppleWaitingToBeEaten)
      When("teeth are sunk into it") StepPlay(WhenTeethAreSunkIntoIt)
      Then("juice flies around it") StepPlay(ThenJuiceFliesAroundIt)
   EndScenario

At this stage we are now looking at directly executable C++ code. Cool, right? It uses something rather magical called a StepHandler class. As you can probably see, it manages the parsing of parameters in the human-readable feature file (I've left them out of the above example for simplicity) :-
#include <string>
#include <vector>

class StepHandler
{
public:
   std::string GetCurrentLine();
   void SetCurrentLine(std::string line);
   void Echo();
   void NotImplemented();
   int get_CurrentRow();
   void set_CurrentRow(int currentRow);
   void IncrementRow();
protected:
   // Helpers for extracting parameters from the current feature-file line
   std::vector<std::string> m_parameters;
   std::string m_currentLine;
   int m_notImplemented = 0;
   void ClearParameters();
   void ExtractParameters();
   void StripDelimiters(std::string& parameter);
   std::string GetStringParameter(int index);
};

The generated C++ code's Handler() macro indicates which step handler the StepPlay() macros use to make their calls :-
class EatingAppleSteps : public StepHandler
{
public:
   EatingAppleSteps();
   ~EatingAppleSteps();
   void GivenAnAppleWaitingToBeEaten();
   void WhenTeethAreSunkIntoIt();
   void ThenJuiceFliesAroundIt();
};

In case you hadn't already realised, the constructor and destructor constitute the setup and tear down for each of the tests.
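
To give a flavour of where the step definitions end up, here's a hypothetical sketch of the matching implementation file (the header name, the body comments and the parameterised example are invented for illustration) :-

#include "EatingAppleSteps.h"   // hypothetical header declaring the class shown above

EatingAppleSteps::EatingAppleSteps()
{
    // Constructor = per-scenario setup (create the apple fixture, etc.)
}

EatingAppleSteps::~EatingAppleSteps()
{
    // Destructor = per-scenario tear down
}

void EatingAppleSteps::GivenAnAppleWaitingToBeEaten()
{
    // Arrange: put the system under test into the "apple available" state
}

void EatingAppleSteps::WhenTeethAreSunkIntoIt()
{
    // Act: drive the behaviour being specified
}

void EatingAppleSteps::ThenJuiceFliesAroundIt()
{
    // Assert: check the observable outcome (e.g. with a google test expectation).
    // If the step carried a parameter, the StepHandler base class helpers
    // (e.g. GetStringParameter(0)) would be the obvious way to retrieve it.
}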

Beautiful, right? We now have a full C++ BDD system with many supporting scripts, fully integrated with Google Test, and we are very happy programmers again.

Credit to James for his amazing work on this system and Seb Rose for the warnings which helped us avoid several dangerous pitfalls on the way - primarily the importance of synchronised feature/executable feature files. We generate an error in the build if they don't match. This was an excellent steer.

Feel free to contact me if you have any questions :)

Sunday 15 June 2014

BDD is TDD

BDD is for anything that you can talk about with non-technical people (e.g. user interface, power consumption requirements, etc.)

BDD is still test driven development.

Sometimes though, we will want to write stories and/or tests that have no direct appreciable benefit to a non-technical person, in which case there isn’t really a ubiquitous language requirement anymore.

They benefit us though, because tests help us code :)


BDD of Unmanaged C++ using Visual Studio 2013 and Specflow

As part of a great BDD training session by Seb Rose, we went about setting up an executable specification environment for our existing C++ codebase. I'm going to talk very briefly about what this is for, and then go into some technical detail about some of the problems we had, and how they've been resolved.


A bit about BDD
BDD - or Behaviour-Driven Development - is a method by which you convert non-technical, "behavioural" requirements into executable code, which can then be used to verify the behaviour has been implemented.

Through a process of deliberation between a product owner, a tester and a technical person, you derive stories, rules, examples and scenarios in a language understood by everyone. These then become an executable specification, which you can use to verify the behaviour of the system.


Specflow, Cucumber and Gherkin
The scenarios are parsed. If you're using SpecFlow, they are written in Gherkin, the same language Cucumber uses. You can read all about it here :-
https://github.com/cucumber/cucumber/wiki/Gherkin

The output of this process is C# "feature" stub code for all the Given, When and Then statements found within a feature, with some regular-expression-powered attribute tags to parse the scenario text into function parameters.

This is cool.


Our setup
We were setting up SpecFlow in Visual Studio 2013 for use with C++ code that is cross-compiled in IAR Embedded Workbench for ARM chips. We do all our unit testing in Visual Studio because frankly, IAR Embedded Workbench is terrible. Our test target code is therefore unmanaged C++ code.

This introduces some significant complexities.

We created a fresh C# MSTest project within our existing unit test solution. I've called this "SpecificationTests" in its latest incarnation. We then added the SpecFlow NuGet package to this project (right-click the project --> Manage NuGet Packages).

C# can't talk directly to our unmanaged C++, so we had to create a "shim" layer in C++/CLI.

I was hung up for a while on a good way to organise this underlying "step test code". Seb suggested organising it by domain entity, so I created a "ScreenSteps" unmanaged C++ project linked to our REAL code, a "ScreenStepsShim" C++/CLI project linked to the ScreenSteps project, and then added ScreenStepsShim as a reference to the SpecificationTests C#/SpecFlow project. The "domain entity" in question here, in case it isn't obvious, is the Screen!
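
For illustration, the shim ends up being a thin managed wrapper around the unmanaged steps, something like the hypothetical sketch below (the wrapper name and the step methods are invented; the real thing wraps our actual screen step code) :-

// ScreenStepsWrapper.h - hypothetical C++/CLI sketch (compiled with /clr)
#pragma once
#include "ScreenSteps.h"   // the unmanaged step class from the ScreenSteps project

public ref class ScreenStepsWrapper
{
public:
    ScreenStepsWrapper() : m_steps(new ScreenSteps()) {}
    ~ScreenStepsWrapper() { this->!ScreenStepsWrapper(); }       // destructor
    !ScreenStepsWrapper() { delete m_steps; m_steps = nullptr; } // finalizer

    // Thin forwarding methods the C# SpecFlow bindings can call
    void GivenTheHomeScreenIsDisplayed() { m_steps->GivenTheHomeScreenIsDisplayed(); }
    bool ThenTheBatteryIconIsVisible()   { return m_steps->ThenTheBatteryIconIsVisible(); }

private:
    ScreenSteps* m_steps;   // raw pointer to the unmanaged object
};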


The Problems We Had And Their Resolutions 

Specflow extension needs to be added to Visual Studio
In addition to adding the NuGet package to your project, you need to go to Tools / Extensions and Updates and search the online section for SpecFlow, then add that. This was the easiest problem :)


Code generation

Runtime library needs to be the same for projects within the solution.

"Inherit from defaults" seemed to disappear and reappear as an option at will in 2013, so forget about that. 

I set everything to Multi-threaded Debug DLL (/MDd) under Project Properties / C/C++ / Code Generation / Runtime Library. I also added the Google Test project we're using for unit testing into the solution so I could re-build that rather than hard-linking to it.


DLL linkage

Because the unmanaged DLL wasn't exporting the functions, a .lib wasn't being generated and the shim couldn't link to the steps DLL underneath it.

What was missing was the DLL export code in the unmanaged C++ header :-


#ifdef SCREENSTEPS_EXPORTS
#define SCREENSTEPS_EXPORT __declspec(dllexport)
#else
#define SCREENSTEPS_EXPORT __declspec(dllimport)
#endif

class SCREENSTEPS_EXPORT ScreenSteps
{
   ...
};

With SCREENSTEPS_EXPORTS defined in the preprocessor settings of the ScreenSteps project (and left undefined in its consumers), there are now functions to be exported, so a .lib is generated (otherwise the linker assumes it is pointless, because there are no functions to call).


Unmanaged DLL not copied by default

AKA "exception thrown by target of the invocation could not load file or assembly or one of its dependencies"...

This was a fun one. When you start trying to call your unmanaged code, it won't work unless the library has been copied into the folder the SpecFlow tests are executing in. The managed library gets copied no problem, but for some reason the unmanaged one does not.
 

Then, when you try and set the "copy local" option to true, it resets itself to false!!! Wow.

This is apparently a common problem for Microsoft. See here :-
https://connect.microsoft.com/VisualStudio/feedback/details/766064/visual-studio-2012-copy-local-cannot-be-set-via-the-property-pages

They fixed it for a while, and it broke again! So now you have to do it manually in the project file!! Wow.

 Adding "<CopyLocal>true</CopyLocal>" to the vcxproj file for ScreenStepsShim seems to work.

<ProjectReference Include="..\ScreenSteps\ScreenSteps.vcxproj">
      <Project>...</Project>
      <CopyLocal>true</CopyLocal>
</ProjectReference>


Test Explorer hang during test run
This is really annoying. It doesn't happen if you just re-run the tests after changing the feature file, but does happen if you change the code THEN re-run the tests.

When you re-start Visual Studio, you can re-run the tests!

The problem is caused by vstest.executionengine.x86.exe... I proved this by killing the process and re-trying without restarting Visual Studio. It seems to get excited when you re-build the other libraries and prevents completion of the SpecFlow test build.

Finally I found this :-
http://stackoverflow.com/questions/13497168/vstest-executionengine-x86-exe-not-closing

Add a pre-build event :-

for 64-bit:
taskkill /F /IM vstest.executionengine.exe /FI "MEMUSAGE gt 1"

or for 32-bit:
taskkill /F /IM vstest.executionengine.x86.exe /FI "MEMUSAGE gt 1"

This entirely solves the test runner problem.


std::string to String^ conversion
Unmanaged std::string to managed C++ String^ (which C# can interpret as a C# string) requires a gcnew in the C++/CLI project, like this :-

      string original = someUnmanagedObject.get_SomeText();
      String ^converted = gcnew String(original.c_str());
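
Going the other way (from a managed String^ back to a std::string so it can be handed to unmanaged code), the msclr marshalling helpers do the job - a small sketch, not taken from the project above :-

#include <string>
#include <msclr/marshal_cppstd.h>   // msclr::interop::marshal_as

// Convert a managed string into a native std::string before passing it
// down to the unmanaged layer.
std::string ToNativeString(System::String^ managed)
{
    return msclr::interop::marshal_as<std::string>(managed);
}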



Hopefully this saves other people some pain.