Arduino + ESP8266 hacking

This post is a good deal more technical than my usual fare. I’m doing this as a write-up of some of the learnings, in part to share with participants at the OzBerryPi IoT meetup that occurred last night.

I’ve recently been investing some time into a project using Arduino as a wifi-enabled energy monitor.

I had an hypothesis that open-source hardware such as the Arduino and related chips like the ESP8266 wifi chip were approaching the point where someone like myself, with a reasonable degree of experience in web development, might be able to build a low-cost embedded electronics project (and potentially, down the track, product).

TL;DR

  • Existing libraries are firmware dependent. Few work with the current version of the ESP8266 firmware. Be sure that you can flash your chips with the target/tested/stable firmware.
  • The current version of the ESP8266 firmware seems to have some bugs/quirks (returning mangled responses, or cruft at the beginning of responses, at times). I wouldn’t consider it “reliable”. YMMV on older (or future) firmware versions.
  • There’s tonnes of information snippets out there, but few “definitive” guides to get up and running. A lot of the information is now stale or inaccurate based on current firmware/library versions etc.
  • Arduino quirks and low memory conditions make it challenging to work with strings without understanding C/C++ character arrays and related methods/utilities. The String class is problematic (buggy, poor memory management) and should be avoided if possible.
  • Design to use very short/small strings to be passed between client and server. Don’t try to implement full HTTP requests/response construction/parsing.

Project goals

The basic gist of the project is to have a power point-level energy monitor, similar to the Belkin Wemo but at a reduced build cost, so I could run 6–12 units in a variety of locations/contexts (i.e. potentially 20–30 units in total).

The parameters of my “experiment”/project:

  • Operate at a power point-level (i.e. for individual power points in a house), not aggregate (i.e. whole of house)
  • Target a USD$10–15 (or less) component cost per unit
  • Built upon common open-source hardware and readily available sensors etc.
  • A hardware architecture/spec that could be scaled up later (i.e. not something as a “one off” hobby approach)
  • Support 5v analog sensors
  • Be wifi-based (not bluetooth)
  • Be self-contained (i.e. not require separate power supplies etc.)
  • Get access to the raw backend data
  • While the longer-term vision/idea was to have individual points send messages to a central unit on the same network, with that central unit communicating with an online service, for my first prototype I wanted the device to post directly to the internet (so I didn’t have to write server code just to get a basic prototype working)
  • Ideally, post to an existing online “Internet of Things” server, such as Thingspeak, Phant.io/data.sparkfun.com etc.
  • Build on a platform with a strong development community/base (incl. documentation)
  • Not require coding/linking to an online service to get the basics working (like posting code to the device)

I note the parameters above as there are a number of existing projects (such as the excellent OpenEnergyMonitor) that meet some, but not all, of the requirements. There are also a number of systems/architectures at USD$25+ that have wifi built in, such as NodeMCU-style development boards, the Particle range, the soon-to-be-released Tessel2, the Arduino Yun, etc. But these wouldn’t meet the price point, nor the longer-term “architecture” goals.

Initial exploration

Doing some preliminary research, it seemed that this should be possible using an Arduino with the Espressif ESP8266 wifi chip. I also worked out that the similarly low-cost Allegro Microsystems ACS712 chip could be used for sensing energy usage. A quick inventory suggested that these two components, combined with an Arduino (for example a non-official Nano, which are available on eBay and AliExpress at very low cost), would be feasible.

In fact, the single most expensive component, it appeared, would be the switching power supply.

Doing a bit more research, I found there were lots of “getting started” tutorials that suggested this would be doable. Most get to the point of sending basic “GET” commands with the ESP8266, and of reading DC current using the ACS712 sensor.

So I got started…

Getting started

Energy monitoring

Current sensor + Arduino Uno + Wifi shield

I got the ACS712 working quite quickly with a DC current and a basic sketch. Working with AC current was a bit more challenging, but I was able to find sufficient documentation online to take samples over a period, calculate the RMS voltage, and derive current from that. A lot of the projects assume an inductive sensor (rather than one that could connect directly to the mains).
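
For illustration, here’s a minimal sketch of that sampling approach. It assumes an ACS712-05B (185 mV/A) on analog pin A0 with its output centred at VCC/2; the pin, sensitivity and sample window here are mine, not from the original project.

    const int SENSOR_PIN = A0;
    const float VCC = 5.0;               // supply voltage (V)
    const float SENSITIVITY = 0.185;     // ACS712-05B: 185 mV per amp
    const unsigned long WINDOW_MS = 200; // ~10 mains cycles at 50 Hz

    float readRmsCurrent() {
      unsigned long start = millis();
      unsigned long samples = 0;
      float sumSquares = 0;

      while (millis() - start < WINDOW_MS) {
        // Convert the raw ADC reading to volts, then remove the VCC/2 offset
        float volts = analogRead(SENSOR_PIN) * (VCC / 1023.0) - (VCC / 2.0);
        float amps = volts / SENSITIVITY;
        sumSquares += amps * amps;
        samples++;
      }
      return sqrt(sumSquares / samples); // RMS current over the window
    }

    void setup() { Serial.begin(9600); }

    void loop() {
      Serial.println(readRmsCurrent());
      delay(1000);
    }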

While I found it a little frustrating piecing together bits of information from all over the web, overall it was a relatively painless experience (about 1–1.5 days effort I would estimate). With the assistance of a friend who maintains and repairs power supplies for a living, I built up a basic test rig containing the ACS712 sensor, that could plug into the Arduino prototype rig I was developing (see picture above).

Wifi

On the wifi front, I thought I’d make things easier for myself by using a pre-designed shield based on the ESP8266 chipset as a means of getting up and running, with a view to later connecting directly to an ESP8266 chip (and reducing cost) using the same code/library/architecture.

I picked up the Sparkfun ESP8266 shield, and used the library provided by Sparkfun. The board would power up (sometimes) and the blue status light would flicker then disappear. Other times it would remain on all the time. Other times it would be somewhere in between. The library consistently returned “Could not communicate with the device” errors. I tried running it using breadboard jumpers, rather than using the header pins in a shield configuration. While this was more stable, it was still very inconsistent. This seems to be a power-related issue. I’ve since been using an independent power supply for other ESP8266 units, but this didn’t fix the issue for the shield (i.e. when sending VCC directly to the shield).

This was the beginning of a very time consuming and frustrating experience with trying to get the Arduino/ESP8266 architecture, using AT commands, working.

After a lot of head scratching, and starting to reverse engineer the SparkFun library to see if I could work out what was going wrong, I switched tack.

I bought an ESP8266-01. As I didn’t have a lot of experience with serial comms and using FTDI interfaces etc., it was a bit of a slog to work out how to get it communicating with the Arduino. I also had to buy a logic level converter and an FTDI interface. With a big assist from some folks at the OzBerryPi IoT meetup a few weeks back, I got comms (simple AT commands) working, first using CoolTerm (a serial terminal), then the Arduino, via the logic level converter. I also added a more stable power supply, so as not to rely on the Arduino 5v/3v3 VCC outs (see pic below for the updated development rig).

Arduino Uno + logic level converter + power supply + ESP8266-01
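
For reference, the “comms test” at this stage was little more than a serial pass-through sketch along these lines (pins and baud rates are illustrative; as I discovered later, newer firmware defaults to a baud rate too high for the Arduino’s software serial):

    #include <SoftwareSerial.h>

    SoftwareSerial esp(2, 3); // RX, TX (via the logic level converter)

    void setup() {
      Serial.begin(9600); // USB serial monitor
      esp.begin(9600);    // ESP8266
      Serial.println("Type AT commands, e.g. AT or AT+GMR");
    }

    void loop() {
      // Relay bytes in both directions so you can type AT commands
      // in the serial monitor and see the ESP's raw responses
      if (Serial.available()) esp.write(Serial.read());
      if (esp.available()) Serial.write(esp.read());
    }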

Once these comms had been established, I searched for libraries that used AT commands to communicate with the ESP8266 chipset. I found a couple of promising ones (for example, WeeESP8266 and ESP8266_Simple), but these didn’t work consistently. Sometimes this seemed to be a firmware issue (expecting certain responses that had changed). Sometimes this related to memory issues (with Serial.println() statements getting corrupted or not working). Other times it would be a non-responsive chip (not responding to the most basic AT command).

Again, I tried to reverse engineer/troubleshoot the existing libraries, but found them a little light on documentation of what was going on and what was expected as a response, and as such it was difficult to decipher what was expected to be received vs. what was actually received.

From what I’d read, I realised that most code examples were very sensitive to the firmware version and AT command set version supported by the device. So I worked out how to update the firmware, but was unable to find the specific versions the libraries needed in a form I could flash from a Mac. (There are a lot of references to 0.9.2.4 as being a “stable” version.)

Eventually, as a “last resort”, I opted to implement a “first principles” approach using the latest firmware (reporting the version number as “0020000903”) from Espressif, which I was able to flash to the device successfully using esptool. This was after losing a few hours working out how to reset the baud rate of the ESP (which was now too high for the Arduino). (“AT+IPR=9600” did the trick—thanks Andrew Rapp.)

I got a basic (albeit what I would consider brittle) version working—despite inconsistent ranges and formats for AT command responses, and weird cruft either at the start of the AT response or mangling the body of the responses. Then I tried to refactor into my own library for cleanliness. Which of course borked everything (again, seemingly memory-related issues).
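
To give a flavour of the “first principles” approach (this is a simplified sketch, not the actual code in the repo): push AT commands out over software serial, and scan the response stream for short known markers rather than trying to parse full responses. The host and byte counts are placeholders.

    #include <SoftwareSerial.h>

    SoftwareSerial esp(2, 3); // RX, TX via the logic level converter

    // Send a command, then crudely scan the response for an expected marker
    bool sendAndWait(const char* cmd, const char* marker, unsigned long timeout) {
      esp.println(cmd);
      const char* p = marker;
      unsigned long start = millis();
      while (millis() - start < timeout) {
        if (!esp.available()) continue;
        char c = esp.read();
        if (c == *p) {
          if (*++p == '\0') return true; // whole marker matched
        } else {
          p = (c == *marker) ? marker + 1 : marker; // restart the match
        }
      }
      return false;
    }

    void setup() {
      esp.begin(9600);
      sendAndWait("AT+CIPSTART=\"TCP\",\"example.com\",80", "OK", 5000);
      // 37 = payload length including the trailing CRLF that println() adds,
      // which also terminates the HTTP headers
      sendAndWait("AT+CIPSEND=37", ">", 3000);
      sendAndWait("GET / HTTP/1.0\r\nHost: example.com\r\n", "CLOSED", 10000);
    }

    void loop() {}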

I presented my experience to the OzBerryPi IoT Meetup group last night. Details of my experience/experiment with this approach are available at my Bitbucket repo of the resultant code.

Days have passed. Still no stable code.

Strings and low memory

Admittedly, not all of this is the ESP’s fault. While there are definitely some quirks and issues with the AT firmware, a lot of it is to do with the Arduino’s constraints and quirks. Two in particular being low memory, and limited string functionality (which are related).

Arduino code is a weird in-between of C and C++. It doesn’t provide the std::string library, nor common string parsing and manipulation tools like regex, apparently due to the memory and storage limitations of the device. The fact that I’m new to C/C++ didn’t help either.

The Arduino environment does provide a String class (not to be confused with std::string), but it is very limited, and by all reports is both buggy and causes memory issues. This means that you’re often having to work with raw character arrays to do string manipulation. There are a bunch of functions to support this (like strstr, memcpy, strcat, strcpy etc.), but it’s a long way from “easy” and straightforward.
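
As a small illustration (mine, not from the original code) of what this looks like in practice, here’s pulling an IP address out of a response buffer with raw character arrays:

    #include <string.h>

    char response[] = "+CIFSR:STAIP,\"192.168.1.42\"";
    char ip[16];

    void setup() {
      Serial.begin(9600);
      char* start = strchr(response, '"'); // first quote
      if (start == NULL) return;
      start++;                             // step past the quote
      char* end = strchr(start, '"');      // closing quote
      if (end == NULL || (end - start) >= (int)sizeof(ip)) return;
      memcpy(ip, start, end - start);      // copy just the IP characters
      ip[end - start] = '\0';              // terminate manually
      Serial.println(ip);                  // prints 192.168.1.42
    }

    void loop() {}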

Lastly, there seem to be very limited tools for debugging and monitoring memory etc. on the Arduino. This makes it difficult to know where the problem lies when things just randomly fail, as there’s no clear way to determine if it’s a memory issue. (I’m aware there are some rudimentary memory monitoring tools/libraries around—but when Serial.println is failing, there’s not much you can do to troubleshoot.)
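
(For what it’s worth, the most common free-memory check I’ve seen is a tiny AVR-specific function like the one below, which reports the gap between the heap and the stack. Useful while Serial still works; useless once it doesn’t.)

    // Free SRAM on AVR boards (e.g. Uno/Nano): distance between the top of
    // the heap and the current stack pointer
    int freeRam() {
      extern int __heap_start, *__brkval;
      int v;
      return (int)&v - (__brkval == 0 ? (int)&__heap_start : (int)__brkval);
    }

    // e.g. sprinkle Serial.println(freeRam()); around suspect code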

Why spend so much time?

An obvious question arises: wouldn’t I have been better off spending the $25 on one of the more expensive, but more stable (one would hope, anyway), architectures to get the data and get started on the bigger-picture goal I have in mind? A false economy, perhaps…

I’ve asked myself this numerous times along the journey, and decided to continue persevering because of some “bigger picture” motives, and other considerations, that suggested investing further time in trying to get this to work. For example:

  • There was no guarantee these more expensive units would be more stable in practice/reality
  • Often these units require entirely different code bases (e.g. NodeMCU uses Lua, Tessel2 uses io.js etc.), and these are often not as well documented, as the technology and communities around them are still nascent
  • Some development boards didn’t have the ADC (analog to digital) conversion capacity—e.g. the ESP8266 has only one ADC, and it requires the input signal to be scaled down to 1v (from a 3v or 5v source, depending on your sensor)
  • I wanted to see if I could get this low-cost architecture working with a view to potentially developing a product to sell in the future, so was trying to stick with components that would have a chance of meeting a target price point
  • I wanted to see if I could get a library working with the current firmware version, to contribute back to community (given there were limited options available)
  • I was using this entire exercise as a learning exercise in developing for devices and hardware, and I was learning a lot of the nuances of C++ (and the Arduino environment, which is not true C/C++), which just takes time (so I’m thinking of this as professional development/training, to some degree)

I think, however, it’s time to let go of my original ideas/approach, and start looking at alternatives.

Possible next steps

I have a couple of ideas about how to get a stable implementation using this kind of architecture:

  • Use a different platform (as noted above) and wear the additional expense per unit to get the basics working, and revisit down the track (or wait for the cost of the other platforms to come down, as has happened recently with the Raspberry Pi Zero)
  • Continue to experiment with different libraries (ESP8266wifi looks promising, seemingly developed with low memory situations in mind, but tested on 0.9.2.4 firmware.)
  • Ditch the use of the HTTP POST method and go for a “lighter-weight” protocol, posting either to a cloud-based server I control/code for, or to a central device on the local network that then posts to the cloud
  • Use the Arduino environment on the ESP8266 chip itself and work out how to deal with sensor input given the limited ADC capabilities. My understanding is some ESP variants have more memory than the Arduino boards, which may make life easier too (see the sketch after this list)
  • Keep the Arduino, but run custom code on the ESP8266 using its built-in libraries, exposing a simple text/serial-based interface for the Arduino to talk to. (Or you could use NodeMCU in a similar capacity.) This would be terribly difficult to debug, though, and far from desirable.
  • Look into embedded programming practices (e.g. coding for the Atmel out of the Arduino environment, or coding for the ESP8266 using the SDK directly)
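
For what it’s worth, here’s what the “Arduino on the ESP8266” option looks like in a minimal connection sketch using the ESP8266 Arduino core’s ESP8266WiFi library (SSID and password are placeholders). The AT command layer disappears entirely:

    #include <ESP8266WiFi.h>

    const char* ssid = "your-ssid";         // placeholder
    const char* password = "your-password"; // placeholder

    void setup() {
      Serial.begin(115200);
      WiFi.begin(ssid, password);           // no AT commands, no parsing
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
      }
      Serial.println();
      Serial.println(WiFi.localIP());
    }

    void loop() {}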

I’m very tempted to take option 1, unless someone cracks the nut with a stable firmware/library combination.

Suggestions

I’ve had to invest a lot more time than I intended/wanted to get to this point. So I thought it would be helpful to share some more top-level thoughts based on that experience. I’m hoping this will be useful to someone considering this sort of architecture for their own project. This is what I’d tell my prior self, if I knew what I know now…

Don’t rely on the AT firmware

I would seek alternatives to the AT command firmware to get a reliable system up and running. The latest version seems to be buggy and, while better documented than previously, the current docs still leave a bit to be desired, with inconsistent response formats and some missing responses for certain states (for example, sometimes the device will return “no ip” as the message, but this is not in the documentation; sometimes it will return a “busy…” message, but I’ve not seen this in the docs either). See Possible next steps above for my thoughts on how you could get around this. YMMV with future (or older) versions of the firmware.

Develop to a specific firmware version

If you do find a specific firmware version that works and you develop code for it, stick with it. Flash all your devices so you’re working from a consistent/known base that you can be confident in. As noted earlier, 0.9.2.X seems to be a popular choice, but I couldn’t work out how to flash the chip to this version on a Mac.

Be a hard-ass on the KISS principle

Given the constraints of the Arduino environment, it’s difficult to build libraries that handle generic use cases (see next point). I’ve found that refactoring into libraries that are more “generic” in what they try to do is where a lot of issues are introduced, due to bloat, additional memory requirements, passing objects/data around in memory etc. I would recommend a laser-like focus on what you need it to do, keeping it as simple as absolutely possible to achieve that objective. i.e. Build only what you need, and resist the temptation to “library-ify”.

Only use simple text strings for comms

Given all the limitations noted above, trying to do anything beyond very basic string manipulation is not advised. For example, the Thingspeak API returns a 700+ character response to a POST message. This is a big string to handle on an Arduino. Creating even a basic POST request takes about 256 chars.

The lack of std::string functionality and other string manipulation tools also makes it quite cumbersome. I now understand why a lot of folks throw out the HTTP verbs other than GET! I’m aware there are some libraries around (like Arduino JSON, or MQTT libraries like pubsubclient or Adafruit’s implementation) that would probably help with some of this… but given the previous point re: KISS, plus my experience working with the AT libraries for the ESP8266, you’ll have to forgive me for being nervous about relying on libraries too much.

So, if you’re communicating with a server you have control over, and you can build a very basic API input/output protocol, this would probably be the best way to go. The idea I mentioned earlier was to have the smaller units send their data, using a simple text protocol, to a central unit running a server on a beefier device, which then relays it up to the cloud using the standard HTTP stack. For example, run a Raspberry Pi or BeagleBone as a central point, running a node.js or python-based server. Keep the individual points doing the absolute bare minimum (getting a basic text string to the local server) and do the “heavy lifting” on a device with more headroom.
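
Purely as an illustration of scale (the message format here is hypothetical), the kind of bare-minimum message I have in mind is something like “<node id>,<reading>\n”: roughly 10 bytes on the wire versus ~256 bytes for even a basic HTTP POST.

    #include <stdio.h>

    char msg[24];
    const int NODE_ID = 3; // hypothetical unit identifier

    void buildMessage(int milliamps) {
      // snprintf keeps us safely inside the buffer
      snprintf(msg, sizeof(msg), "%d,%d\n", NODE_ID, milliamps);
    }

    void setup() {
      Serial.begin(9600);
      buildMessage(1250);
      Serial.print(msg); // in practice this would go to the local server
    }

    void loop() {}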

“Growing” 3D printed objects

I found this demonstration and talk by Joseph DeSimone of Carbon 3D, explaining a new method of 3D printing where the elements are sort of “grown”, really interesting.

I’ve been fortunate enough in my current role to be able to explore 3D printing—working with great peeps like Mel Fuller from Three Farm and Matthew Connolly from me3D in teaching this technology to young folks.

It’s really piqued my interest—I’m fascinated by the possibilities. I’ve been looking into various approaches to 3D printing bikes (perhaps unsurprisingly!), among other things. But I also see potential in creating parts for things like robotics projects.

One of my “alternate lives” would be as an industrial designer. I’ve long had an interest in building things in real life (starting with my love of LEGO, but extending to radio control cars, and dreams of being a robotics engineer at one point). But I’ve never quite had the skills or equipment to pull that off. I thought about heading back to study industrial design at one point, but wasn’t quite convinced it was the right path for me.

What I’m finding most inspiring/interesting about 3D printing is that it brings into reach many of the things that I always dreamt of being able to do. I was very excited recently to be able to 3D print an iPhone stand for a custom application at work. Not knowing how to use the software at the start of the day, we were able to get a first prototype designed and printed within the space of a few hours.

Speed and strength are two key issues with the resulting output for some applications. For example, if I was still actively working on RC cars, I can see countless opportunities for customisations and enhancements using 3D printed parts, but they would need to be quite strong.

The Carbon 3D technology is much faster, supports a wide range of source materials, and is stronger—so seems to address a lot of those issues.

I also think about applying this sort of thing to creating the robot pieces that I envisaged when I was a youngster, attempting (unsuccessfully) to build a robot with an articulated arm out of wood.

Combined with my ongoing interest in robots and technologies like the Raspberry Pi and Arduino, I see the potential to fulfil those childhood/teenage dreams.

Suffice to say I’m finding the whole “digital making” space very inspiring at a personal level (and wishing I had more time in my professional capacity to explore and play with the tech that we’re teaching at IDX!)

Robots at TED

Anyone that’s gotten to know me reasonably well is aware of my love of robots. There’s been a feast of stuff on TED recently on the topic—below are two standouts for me.

Raffaello D’Andrea: The astounding athletic power of quadcopters

I love the way they use the metaphor of athleticism to create some new ways of looking at how these robots work. Some really impressive work, esp. the adaptation to losing two working rotors!

Rodney Brooks: Why we will rely on robots

I’m not sure I’m totally sold on the idea of a heavily robotic supported future, but this is definitely food for thought.

Mazda’s approach to sustainability

Reading “Mazda SkyActiv is a novel approach to fuel efficiency; will it work?” over at Autoblog Green got me thinking. The article outlines how Mazda is eschewing hybrid and EV technologies (in the short term) to instead focus on light-weighting and efficiency.

It’s an interesting approach. I’ve written before about the “hyper car” concept outlined in Natural Capitalism by Paul Hawken, Amory Lovins and L. Hunter Lovins. Whereas the Lotus vehicle I was responding to in that post was just a concept, it’s interesting to see a mainstream brand like Mazda (which seems to be a bit more prominent in the Australian market than the US based on the Autoblog Green article) taking this approach to market. (It’s interesting to note that Audi have also announced a carbon-fibre project using an Australian partner.)

Autoblog Green asks if it will work — indeed, will it sell more cars. I think it’s actually a reasonably smart approach. The jury is still out on EVs and hybrids and the specific technologies that might “win” the race (including hydrogen fuel cells). With EVs taking a little while to gain traction in the market, there is a strong argument for holding off significant R&D expenditure in this area until the market is more mature.

(That said, I still think that electric vehicles will end up being the technology of choice, regardless of power source. And it is definitely important that some manufacturers lead the way, as Tesla and Nissan, among others, are doing.)

Regardless of which technology gets up, the measures that Mazda is exploring will all be relevant. And in the short term, with consumer uncertainty (and the high relative up-front cost of hybrid and EV vehicles), focusing efforts in this area can only benefit the Mazda brand. That is to say, for those customers that aren’t ready to make the switch to EV/hybrid, the fuel efficiency benefits would likely be of appeal (and therefore have an impact on sales). But it won’t be long before Mazda will need to start investing more heavily in alternative fuel/powertrain technologies. I’m sure, however, that they are keeping a close eye on developments and will be ready once a dominant approach appears. Definitely one worth watching…

Where is my robotic companion?

I bought this t-shirt from Threadless some time ago — it was a tongue-in-cheek reflection on all the fanciful things that we saw in movies and cartoons as a child growing up that hadn’t quite come to fruition yet. It seems, though, that the second item on the list — that of the robotic companion — may be pretty close at hand.

I’m not one to make predictions, typically (even if it is new year’s day), but a couple of things that have recently come across my radar have got me thinking that the age of the robotic companion is coming — probably in about 5 years’ time we’ll see the first commercial versions.

What are the developments? Apple’s Siri is probably the most mainstream. This is natural language recognition and response in a consumer-grade mobile phone. Yes, it’s internet assisted (much of the processing is done by much more powerful computing hardware than the phone). Yes, it’s early days yet (anyone that’s used Siri will be well aware of its limitations).

But what’s so special about Siri? Haven’t we had voice recognition on phones for some time? Siri is more than just recognition of pre-defined commands — it incorporates natural language processing, which is a real leap forward in my view.

In 5 years’ time, talking to our phone will be as natural as touch gestures are today, I suspect. This will in part be aided by the continuation of Moore’s Law, as Mark Pesce recently reminded us:

By 2020, some of us will be walking around with a teraflop in our pocket, interpreting our speech, watching our gestures, and effortlessly handling sophisticated social transactions – invisibly, continuously and tirelessly.

The second is Boston Dynamics’ “PETMAN” project:

According to Gizmodo, Boston Dynamics expects to deliver PETMAN to the US Military as early as this year (2012). Given that, I suspect that the video that has been released is probably of an older variation, and the technology is even further advanced in the lab.

The third is the advancement of facial expressions in robotics. See, for example, Actroid-F:

While the latter two examples are still in the research and development phase, and are clearly going to be extremely expensive, these early developments will no doubt trickle down into consumer-level products shortly after, as is often the case with these things. I’m putting a stake in the ground and saying 5 years (though it might end up being 10).

But in either case, at least one of those “damned scientist” wishes is just around the corner…

Follow-up: TripIt confirmed bug

Just a quick update on the whole TripIt debacle.

The TripIt support team were very good in responding to the problem. While I didn’t get any money back, they didn’t throw me any legalese/boilerplate response, and took the issue seriously. Kudos to Ruth, the support rep who was my primary contact, for handling this well.

They responded by offering me an upgrade (worth $49 in $$ terms, but pretty useless to me in the context of what has happened, as the primary issue is the fact I no longer feel like I can trust the application) and looked into the matter.

The first suggestion was that this was a daylight savings issue with my phone (as a few folks have suggested to me personally or via Twitter), but I pointed out this didn’t seem to make sense because:

  1. The earlier flight on the same day is also displayed as AEDT and this is displaying correctly as 6:15am (as per the web-based application).
  2. The support team asserted that “in Australia and on April 3, 2011, Daylight Savings Time ended and I believe because the last flight (Virgin Blue 885) coincided with that date”, which was incorrect. The flight was for April 2 at 7:15pm and flight time was 1 hr and 15 mins, meaning I would have arrived in Sydney before 9pm on April 2. DST didn’t end here in Australia until 2am on April 3, well outside the range of that particular flight.
  3. Even if the flight did cross timezones, the departure time should reflect the timezone of departure, not the destination, so this still should not have happened.
  4. I confirmed the bug both in Melbourne under daylight savings (when the error occurred) and in Sydney (upon arrival the following day) outside of daylight savings — which suggests that it was not an issue with the settings on the phone, as under that explanation the problem should have appeared on only one side of the timezone change, yet it clearly occurred in both timezones.

After this response, the team looked into it further and found:

It appears that in our system, for Melbourne, Australia, our system had the April 2 date listed as the end of Daylight Savings Time for EST.

I’ve immediately filed a ticket with our engineers to make sure that daylight savings time is properly picked up for Melbourne to fix this issue going forward. I’m also having our engineers double-check all timezones in Australia.

So the issue was confirmed as a daylight savings issue, but not related to my phone or setup.

I still don’t think that particular finding fully explains the issue (if it clicked over on April 2 instead of 3, why was the first flight time on the same day correct?). But at least I’m glad that identifying the issue may avoid future issues for other TripIt users.

Reflections on Flavour Crusader at Social Innovation Sydney

I’ve just gotten back after running a very short workshop session to test the Flavour Crusader application at today’s Social Innovation Sydney meetup and I wanted to take a short moment to “braindump” (more than reflect) about the session while it’s still fresh in my mind. (See my previous post for more background on the project.)

First up, thanks to everyone who participated in the workshop — we really appreciate the feedback. And I’d like to especially thank the volunteers that helped Sharon and me facilitate the session — Angela, Miream, Penny and Tony. And also thanks to Michelle and Kate for creating the space in which the session could occur.

While we (obviously) haven’t had a chance to really dig into the more detailed reflections, even the top-level feedback that came out of the session has been really helpful.

About the prototype

For those that weren’t able to attend, if you have an iPhone, feel free to preview the web application. Please bear in mind that this is a very early prototype outlining only some of the core features that have been discussed/considered. In the vein of the “lean startup”, the aim is to deliver a “minimum shipping product” to get early feedback and verify/validate design directions before progressing further. It is not fully accessible (we are rapid prototyping using HTML/CSS and JavaScript, but have not tested widely), and today we identified some issues running on Android devices, so your mileage may vary. Adding the app to your home screen on the iPhone and launching from there will give you the best experience.

There is a feedback mechanism inside the application, so please feel free to send us your thoughts and suggestions if you do use the application.

As mentioned previously, the session today aimed to evaluate how well the application, in its current form, supported people in the following scenarios:

  • You are on your way home from work and thinking about cooking a dinner with fresh, local produce
  • You are planning a dinner party on the weekend and you want to base it on fresh, local produce
  • You are in a store choosing your fruit and veggies for the week and you want to find out if something is in season

This approach is broadly aligned with the “Can do” phase of Les Robinson’s Enabling Change model. (We’re also giving some consideration to some early social features for the application, especially to create the sense of “Satisfaction” to support sustained adoption. And of course days like today are in part about building “Buzz”, “Invitation” and “Trial”.)

Workshop scenarios

To do this we set up three different “stations” in the room to provide a mock context for each of the scenarios outlined above:

  1. A “bus” where participants were encouraged to consider “on your way home”
  2. A “kitchen table” with recipe books and shopping lists to plan the weekend dinner party
  3. A “store” with a combination of local and imported produce

(I’m hoping that Tony’s photos will provide a visual illustration of the session — I’ll post some links here once they’re online.)

Each participant was given a sheet with areas to reflect on the process they undertook around each scenario, and participants that didn’t have an iDevice (or Android phone) were provided with one, or buddied up with someone who did. The aim was to get participants to put themselves into these particular contexts and use the application to support them.

Today was a sort of prototype for the workshop format itself. I’ll be running it again in a few weeks’ time with my uni cohort (and potentially at other foodie events in the future), incorporating a lot of the learnings from today as well. The first lesson about the session format was “more time”: we elected to run a 30 min session, which is clearly not enough given the level of engagement participants gave us today. Next time we will allow for more time at each station.

Another was that with a (somewhat unexpected) large turn-out — we had over 20 participants in the room — we needed a way to allow for group discussion within each station. And thirdly, we found that when participants focused on the “reflection questions” we provided, they were less actively engaged in the context of use — i.e. actually using the application. All great learnings to apply in future.

(If anyone who attended wanted to provide further feedback I’d love to hear from you in the comments to this post…)

Early reflections/next steps

One thing that seems reasonably clear, even from early “debriefing” of the session, is that Flavour Crusader’s tight focus on efficacy — that is, providing assistance in how to prepare fresh produce, including deeper integration between produce items and recipes — is definitely the right path. The challenge, with so many great ideas, will be to keep that tight focus, and not try to implement everything!

That said, I’m really looking forward to digging further into participants’ reflections — I’m certain that there’s some great nuggets in that feedback as well. Given the great level of participation, that may take us a little longer than anticipated! But I can’t think of a better problem to have 😉

FlavourCrusader at Social Innovation Camp

A little while back I put a call out for folks that were social media savvy and interested in food to do some interviews for a uni assignment.  The interviews went really well (thanks to everyone involved!) and I’ve been remiss in not reporting back on progress since then.

For my uni assessment I produced two reports and a set of design personas to support the development of the FlavourCrusader project:

  1. Local food production and cosmopolitan localism (PDF 99 KB)
    This paper examines some of the drivers behind the emerging trend towards local and organic produce and the related growth of farmers markets: sustainability, health and safety, quality and taste, and food as experience. It then explores local food production as a form of social innovation, considering its potential for expansion using social technologies.
  2. Report on design research with urban local food customers (PDF 157 KB)
    Reports on the findings of interviews with 5 social media savvy food lovers who purchase locally-produced food.
  3. Personas (PDF 1.7 MB)
    Design personas reflecting the user research and learnings from the initial report looking at local trends etc.

Since that work was completed, the team at Zumio and I have been working with Sharon Lee, the project lead for FlavourCrusader, on a prototype of the core functionality of the application. The prototype focuses on a seasonal food guide and recipes, as these were the elements identified through the interviews as being most useful in a mobile application.

Next Saturday (26 Feb 2011) we’ll be running a session at the Social Innovation Sydney (SI Syd) event in Paddington to get feedback on this prototype. Sharon has done a guest post over at the SI Syd blog about the FlavourCrusader session.

As Sharon’s post points out it’s still very early days — we’re really just trying to provide the bare bones functionality to start getting feedback about what the issues/barriers are and where we should go with it next. Specifically, we’re trying to provide support for the following scenarios:

  • You are on your way home from work and thinking about dinner. How would you use the application to help you choose your dinner?
  • You are planning a dinner on the weekend, how would you use the application to help you plan?
  • You are in a store choosing your fruit and veg for the week and you want to find out if something is in season. How would you use the application to determine this?

There may be other situations where it might be useful, of course — we’d be interested to hear of those if you have any ideas.

Using it “in real life” is obviously the best way to test — so we’re really looking to understand how people go about these things and how, if at all, the app might help. So the session will involve a bit of fun role-playing as well as more straightforward testing.

Our hope is the session will give us an understanding of:

  • How well does the app support this process currently?
  • What frustrations or barriers are there?
  • What needs to be added for people to be able to achieve these goals with it?

In any case, if you’re able to make it down to SI Syd next Saturday — we’re hoping the session will occur just before lunch — I’d love to see you there and get your thoughts.