
How do I get started in Home Automation development?

TL;DR: Here is a listing of Zigbee and Z-wave dev kits and IDEs with costs and LINKS!

I accepted a paid research position at school for the summer term.  Turns out that the research is very much related to the industry in which I work: Home Automation.  We aren’t actually researching home automation, but we’re using home automation systems to collect data.  The reasoning is that the systems are readily available, robust, and infinitely customizable.  The customization I’m talking about is on the back end.  Many sensors and things are available off the shelf and can be integrated into any network using the same protocol.  So, on the back end we can develop a system that operates on the collected data in whatever way we please.

The two primary communication protocols discussed here are Zigbee and Z-wave, the two most popular commercially available systems at the time of this writing.  The reason for focusing on these systems is threefold:

1. Cost.  The widespread acceptance of these standards helps to drive down costs.

2. Support.  The development kits, documentation, and expert support are all readily available.

3. Existing components.  Many off-the-shelf components already collect the data we want, and are integrable with any compliant network.

Network Topologies and Radios

Zigbee and Z-wave are actually standards that define network protocols, and companies simply build microprocessors and radios which meet these standards.  Before Zigbee and Z-wave really caught on, many companies were developing their own proprietary systems and radios.  Some of these proprietary systems were network based, others were very simple control schemes.  Generally, the radios used were around 900MHz sending data at low baud rates (<=9600bps) for FCC compliance reasons.  The flavor of Zigbee that caught on commercially operates at 2.4GHz and up to 250kbps (depending on the implementation).  The 2.4GHz band is significant because it’s globally accepted.  WiFi uses the same band, so it’s rather crowded.

Zigbee builds its network around a central “coordinator”, with traffic relayed through “routers” (the standard actually supports star, tree, and mesh topologies).  A router is simply a device that has the ability to pass messages back and forth between “end devices” and the coordinator.  End devices are really only intended to receive commands and report.  The creator/installer of these systems is generally required to identify the role of each device: coordinator, router, or end device.  The network ultimately resembles a starfish, or perhaps a snowflake (for very large networks).  As mentioned before, the most popular Zigbee flavor communicates at 2.4GHz, but a ~900MHz flavor also exists.  Zigbee’s radio uses Phase Shift Keying, which encodes digital signals over the air by shifting the phase of the radio carrier wave.  Depending on the phase of the signal received, the receiver knows whether it is seeing a “1” or a “0”.

Z-wave uses a mesh network, meaning that every device can route messages through any other device within range.  This allows for many routes for data to take around the network.  All the data is still passed up to a coordinating device.  Furthermore, Z-wave implements a sort of mesh healing, wherein lost nodes can be routed around by discovering a new path through other nodes.  This all happens over RF in the 900MHz range (depending on region) using Frequency Shift Keying.  Frequency Shift Keying is a way of encoding digital signals over the air.  The transmitter sends signals on one of two frequencies to send either a “1” or a “0” to the receiver.
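To make the modulation difference concrete, here’s a toy sketch in C.  Every number in it is a made-up placeholder (not either protocol’s actual channel plan), and real radios use proper symbol shaping and fancier variants (the 2.4GHz Zigbee PHY actually uses O-QPSK), but it shows how one bit maps to a carrier under each scheme:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Toy illustration only: placeholder frequencies, no symbol shaping. */
#define CARRIER_HZ 2400000000.0   /* pretend 2.4GHz Zigbee carrier */
#define FSK_F0_HZ   908400000.0   /* pretend Z-wave "0" frequency  */
#define FSK_F1_HZ   908500000.0   /* pretend Z-wave "1" frequency  */

/* PSK: one carrier frequency; a "1" flips the phase 180 degrees. */
double psk_sample(int bit, double t)
{
    double phase = bit ? M_PI : 0.0;
    return cos(2.0 * M_PI * CARRIER_HZ * t + phase);
}

/* FSK: two frequencies; the choice of frequency carries the bit. */
double fsk_sample(int bit, double t)
{
    double f = bit ? FSK_F1_HZ : FSK_F0_HZ;
    return cos(2.0 * M_PI * f * t);
}

int main(void)
{
    double t = 1e-9;  /* sample the carriers at an arbitrary instant */
    printf("PSK: 0 -> %+.3f, 1 -> %+.3f\n", psk_sample(0, t), psk_sample(1, t));
    printf("FSK: 0 -> %+.3f, 1 -> %+.3f\n", fsk_sample(0, t), fsk_sample(1, t));
    return 0;
}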

So, which radio is better?  Neither.  They both have pros and cons.  Z-wave’s radio is arguably better for indoor use, as the lower frequency is better able to penetrate building materials.  It is also on a relatively quiet band, where the only other traffic is coming from things like weather stations and garage door openers.  Zigbee operates in the same band as WiFi, which raises the noise floor for the carrier waves.

The latest Z-wave chip is only capable of 100kbps over the air, while Zigbee’s radio is capable of 250kbps over the air (at 2.4GHz).  Does that really matter?  I guess it depends on how fast you need your data to travel.  For very large networks, the transfer rate could be very important.  For your typical 2000 square foot home, this is a non-issue.  Also, the radio performance is greatly influenced by the antenna geometry and load matching.  If you’re developing a product that requires RF communication, you need an RF engineer to design a proper antenna to get the best performing radio.  Luckily for low-budget developers, most dev kits for both systems come with antennas already matched to the dev kit circuits.
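For a rough sense of what those rates mean, here’s a back-of-the-envelope airtime calculation.  The 64-byte payload is an arbitrary example, and this uses raw bit rate only, ignoring framing overhead, ACKs, retries, and mesh hops:

#include <stdio.h>

int main(void)
{
    const double payload_bits = 64 * 8;     /* a 64-byte sensor report  */
    const double zwave_bps    = 100000.0;   /* Z-wave: 100kbps          */
    const double zigbee_bps   = 250000.0;   /* Zigbee: 250kbps @ 2.4GHz */

    printf("Z-wave: %.2f ms\n", payload_bits / zwave_bps  * 1000.0);
    printf("Zigbee: %.2f ms\n", payload_bits / zigbee_bps * 1000.0);
    return 0;  /* ~5.12 ms vs ~2.05 ms: a non-issue for one house */
}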

Which network is better?  Neither.  They both have pros and cons.  The Z-wave mesh is self-healing, and it routes messages whichever way it can.  The problem is that re-routing a message through an unintended path requires that a node act as a repeater, which requires it to listen at all times.  This means more power drain on such a device, which basically means that it cannot be battery powered.  A Zigbee network can be undermined by dying routers (in the case of battery operation).  Either way, the networks will transfer data back to the coordinator.

In my research I have found that Zigbee end-users complain of some Zigbee-compliant devices not working with others.  One goal of both standards is to maintain consistent compatibility between devices, brands, developers, versions, etc.  The problem with Zigbee is that they do not require device certification before a device is released to market.  The problem with Z-wave is that they do require device certification before a device is released to market.  And, it costs money!  Furthermore, Z-wave compliance ensures backward compatibility of new devices with the old, but not forward compatibility of old devices with new.  In short, as new features become available in the Z-wave protocol, older controllers (hubs, gateways, coordinators) may not support newer device features.

Which one costs less?  This is the hardest question to answer.  I have been requesting quotes and searching for development kits.  Zigbee is by far the easiest to find.  The price of Zigbee dev kits ranges between $1,000 and $5,500.  The Zigbee code has various names, depending on the dev kit vendor.  But, whatever “stack” you end up with, you will need IAR Embedded Workbench to compile it.  IAR Embedded Workbench is an IDE, and a license for it costs between $2,500 and $3,500, depending on type.  Supposedly there are situations where you will not need to modify the Zigbee stack.  However, experienced users told me they were told the same thing and found they had to modify it anyway.  Finally, Zigbee would really like it if you had your product certified.  The cost of this certification is rather elusive, until it’s time to have it done.

Getting started with Zigbee will cost you $3,500-$9,000, depending on which dev kit you choose and your IAR Embedded Workbench license type.

Z-wave was a bit harder to find.  Z-wave is proprietary: only one company makes the SoCs, defines the standard, certifies devices, etc.  That company is Sigma Designs.  The way I see it, Sigma is taking the “Apple approach” to the Home Automation Network Protocol game.  An all-inclusive dev kit costs about $3,000.  This kit actually comes in two parts: the main developer’s kit, full of dev boards with SoCs, and a regional radio kit.  In short, the Z-wave radio for US products won’t work with Z-wave radios in products intended for other markets around the globe.  Furthermore, the Z-wave protocol also requires a specific IDE: Keil’s PK51 for 8051 chips.  The only quote I have on this IDE so far is for $1,320.  Finally, Z-wave requires certification before your device can be marketed.  Certification costs between $1,000 and $3,000, depending on product complexity.

Getting started with Z-wave will cost $4,320 ($3,000 kit + $1,320 IDE), plus $1,000-$3,000 for certification before you can go to market.

BUT WAIT!  There’s more!  Atmel has released several eval boards for Zigbee that are peculiarly cheap compared to the competition.  Furthermore, Atmel’s free Atmel Studio 6 is purportedly able to compile Atmel’s proprietary Zigbee stack (called BitCloud).  I read through the datasheets, the user manuals, etc., and it all just seems too good to be true.  I’m not sure if their offering is on the same level as the more expensive dev kits, or if it’s more on the level of XBee (a standalone Zigbee module popular with hobbyists).  Still, I am intrigued.

Here is a listing of the dev kits I researched.  There are actually more Zigbee kits available.  But, by the time I hit Freescale’s offering I was suffering from analysis paralysis and decided to stop my search.  Personally, I’m leaning toward the TI kit, but I’m biased.


The Football That Can’t Not Be Caught

This might sound crazy, but I’ve been pursuing a Bachelor’s degree for about six years.  What’s crazier is that I’m only halfway done.  I can explain:  I go part time, and I needed a lot of prerequisites that weren’t included in my previous degree.

What’s possibly even crazier is that I have had ideas for my senior project since before I even started the program.  I am here today to discuss one of those ideas.  When I first had this idea, I had only a cursory knowledge of the technical details.  I knew it was possible, just not all that it entailed.

If you read the title, you know the idea.  Perhaps it’s about time I just said what it was.  …  I could just keep referring to it as “it” and prolong the anticipation for the big reveal.  That is, of course, assuming you have not read the title of this post.

The idea is guided artillery.  Not a new concept.  However, most guided artillery is designed with one major criterion that we will be leaving out of this project:

Traditional Guided Artillery Design Criteria

  1. Kill People

My idea still involves guiding a projectile at a person (in theory).  However, this projectile would not explode on impact.  It wouldn’t explode at some proximity.  It wouldn’t explode at all.  In fact, it would be soft, and grippy…  Like a Nerf football.

So, let’s list the desired end-user functionality of this guided artillery football:

Toy Guided Artillery Design Criteria

  1. Be throwable, like a football.
  2. Be catchable, like a football.
  3. Change direction during flight so as to minimize the distance between itself and some designated target.

Numbers one and two seem simple enough.  There might be some off-the-shelf items that could cover those bases.  I don’t know, maybe I’m underestimating the difficulty of those two criteria.

Number three is the real challenge.  How the hell do you make a football guide itself to a target?  The first problem that comes to my mind is that the ball’s flight time is limited by whoever threw or kicked it.  That greatly limits its ability to reach a target.  Secondly, the ball has no control surfaces with which it could change its direction during flight.  Along those same lines, it requires a lot of practice to develop the skill required to throw a stable-flying football.  Finally, and probably the worst part, it takes A LOT of fast-moving data and sensing to fly anything toward a target.

So, let’s split this big problem up into little bite-size, chewy pieces.

  1. The Guidance System
    1. Navigation – “Where am I?”
    2. Guidance – “Where am I going?”
    3. Control – “We’re in the pipe, 5 by 5!”
  2. Flight
    1. Stabilization
    2. Control Surfaces

The guidance system may be the most complex part of the project.  However, flight isn’t easy, either.  Flight is mostly a mechanical issue.  And, in this case, the control surfaces will be totally experimental, so they cannot be trusted.  Stabilization is easy enough, though.  We just need to stick some fins on the back.  Or a long, flowing tail.

The control surfaces themselves are rather difficult.  In order for the football to be throwable and catchable, the control surfaces need to be discreet.  In other words, they need to be hidden away until they’re needed.  Furthermore, typical control surfaces on an airplane take advantage of the large lift-generating wing in front of them to alter the lift output.  The football will have small stabilizer wings, if any wings at all.  That means that it will not generate its own lift.  Therefore, the control surfaces are actually going to be simple air brakes.

Now that we have the “simple” part out of the way, let’s move on to the complex part.  The guidance system needs to answer two questions continuously:  1. “Where am I?” and 2. “Where am I going?”  The answers to these questions will determine which control surfaces deploy and at what time.  However, answering these questions is potentially very difficult.

“And you may ask yourself

Well, how did I get here?”

I propose that the guidance system use an inertial measurement unit and a fixed reference point (the starting location) to determine where it is.  To do this, the football will require an accessory:  a throwing glove.  The throwing glove will have magnets embedded at strategic locations along the gripping surfaces.  The ball will have Hall effect sensors in close proximity to its outer surface.  In this way, the ball will know when it has left the thrower’s hand.

Next, the ball will have an accelerometer and possibly a gyroscope.  Many accelerometers are now capable of measuring the constant pull of gravity and comparing it to their X, Y, and Z axes.  Once the Hall effect sensors detect that the ball has left the thrower’s glove, the ball can measure the change in acceleration on all 3 axes, compare it against the acceleration of gravity, and integrate to approximate its location relative to its starting point.
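Here’s a minimal dead-reckoning sketch in C to show the idea.  The sample rate is a made-up assumption, and I’m pretending the readings are already rotated into the world frame; a real version would need the gyroscope for that rotation, and double-integration drift adds up fast:

#include <stdio.h>

#define DT      0.001   /* pretend the IMU samples at 1kHz */
#define GRAVITY 9.81    /* m/s^2 */

typedef struct { double x, y, z; } vec3;

static vec3 pos = {0, 0, 0};   /* meters from the release point */
static vec3 vel = {0, 0, 0};   /* m/s                           */

/* Call once per sample after the Hall sensors report release. */
void dead_reckon_step(vec3 a)
{
    a.z -= GRAVITY;        /* subtract the constant pull of gravity */

    vel.x += a.x * DT;     /* integrate acceleration -> velocity */
    vel.y += a.y * DT;
    vel.z += a.z * DT;

    pos.x += vel.x * DT;   /* integrate velocity -> position */
    pos.y += vel.y * DT;
    pos.z += vel.z * DT;
}

int main(void)
{
    /* Fake 0.1s of forward acceleration at 20 m/s^2. */
    for (int i = 0; i < 100; i++)
        dead_reckon_step((vec3){20.0, 0.0, GRAVITY});
    printf("estimated position: x = %.2f m\n", pos.x);  /* ~0.1 m */
    return 0;
}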

But, how does it know where to go?  I’ve been putting a lot of thought into this.  The best I have right now is, “I don’t know.”  I don’t know, because of the following reasons:

Reasons I Don’t Know

  1. The form factor limits the complexity and fragility of on-board sensors.
  2. The form factor also limits the accuracy of on-board sensors, because the ball cannot be expected to be precisely stable.  In fact, it may intentionally be spun about its axis to gain distance.

Anyway, while writing those last two items, I had an idea aside from all the commonplace ones (cameras, infrared, radar, GPS, etc.).  However, this idea greatly limits the “fun factor” of the concept.  Although, it does use the existing hardware and adds only one more accessory:

THE CATCHER’S GLOVE.

The catcher’s glove will essentially just be another thrower’s glove.  It will be exactly the same.  The difference, though, is how it is used.  Before the ball is thrown, it must be told what mode it is in.  The first mode will be “target mode”, wherein the catcher makes contact with the ball.  While making contact with the catcher, the ball initializes all of its location variables.  Essentially, X=0, Y=0, Z=0.  Next, the ball is placed in “launch mode”.  In this mode, it is recording changes in its location and waiting to both make contact with the thrower’s glove AND to lose that contact.  Upon losing contact, the ball enters “flight mode”, wherein it tries to get all of the axial changes back to zero by actuating the control surfaces accordingly.
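Sketched as code, the mode logic might look something like this in C.  All the sensor and actuator functions here are hypothetical stubs I invented for illustration:

#include <stdbool.h>
#include <stdio.h>

typedef enum { MODE_TARGET, MODE_LAUNCH, MODE_FLIGHT } ball_mode;

static ball_mode mode = MODE_TARGET;
static bool was_in_glove = false;

/* Hypothetical hardware stubs: a scripted contact sequence stands in
 * for real Hall sensor reads, dead reckoning, and air-brake control. */
static bool contact_script[] = { true, false, true, true, false, false };
static int  tick;
static bool hall_contact(void)    { return contact_script[tick]; }
static void zero_position(void)   { puts("location zeroed at target"); }
static void track_position(void)  { /* dead-reckoning update here */ }
static void steer_to_origin(void) { puts("deploying air brakes"); }

static void ball_update(void)
{
    switch (mode) {
    case MODE_TARGET:              /* catcher touches the ball:    */
        if (hall_contact()) {      /* zero the location variables  */
            zero_position();
            mode = MODE_LAUNCH;
        }
        break;
    case MODE_LAUNCH:              /* wait to make glove contact,  */
        track_position();          /* then to lose it              */
        if (hall_contact())
            was_in_glove = true;
        else if (was_in_glove)
            mode = MODE_FLIGHT;    /* released: we're flying       */
        break;
    case MODE_FLIGHT:              /* drive axial deltas to zero   */
        track_position();
        steer_to_origin();
        break;
    }
}

int main(void)
{
    for (tick = 0; tick < 6; tick++)
        ball_update();
    return 0;
}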

And that, my friends, is the football that can’t not be caught.

On Confidence and Experience

I learn by doing.  My family bought a Nintendo Entertainment System when I was maybe 4 or 5 years old.  I didn’t care to read the instructions.  I just plugged it in and started playing, bumping into everything and testing everything until something happened.  I recall Who Framed Roger Rabbit being particularly challenging.  That same methodology has been with me ever since.


This is both good and bad.  I was validated last week by my Microprocessor Apps professor.  A student asked him a “what if” question regarding a program in assembly.  He took it as an opportunity to show a few different approaches.  Of course, we found limitations in the platform this way.  We were bumping into everything.  Then, finally, after re-writing the code several times, we found a method that worked.  He then told us, “this is the process of engineering.”  You cannot be afraid of the many, many failures you will have.  You just have to keep pressing until you figure out what works.  In this way, my methodology is good.

Now, let me finish the rest of my original story.  I never finish a video game, except for a few.  This is because I get bored after I’ve bumped into the things and tested the buttons.  I don’t really care about the story.  I don’t marvel at the graphics.  (Let it be known, my favorite game of all time has 16-bit graphics.)  I just lose interest.  This is the bad part of my methodology.  Whether successful or not, when a project is nearing its logical end, I begin to loathe it.  I want to move on to something new.

Sometimes, though, I enjoy something so much that I’m inspired to improve upon it, or recreate it in my own way.  So, I started learning to program in Visual Basic around the age of 10.  Of course, my interest waned when I bumped into the limitations of my abilities and creativity.  At one point I managed to install Linux.  That forced me to purchase “C++ for Dummies”.  A few years later, Macromedia Flash was blowing up on the internet.  I picked it up and started learning the integrated scripting language ActionScript.  But, I spent most of that time animating instead of programming.  Toward the end of high school, I had the opportunity to take programming classes.  I honestly don’t remember all of the languages I studied.  I know there were at least three, but I only recall Visual Basic and Pascal.  (The reason I remember them is because the classes ended before lunch and most of us stayed in the room and played Quake 1 on the school’s LAN.)  Through all of this, I tried and tried to recreate my favorite video games using various languages.  Very few were successful.

It wasn’t until I was much older and much less busy that I finally programmed and published a full, working game.  Unfortunately, it was a Flash game.  Therefore, it enjoyed very limited success and very limited exposure.  It was also kind-of awful.  Still, I made money from it.  For the first time in my life, I had earned money for programming.  I immediately began working on an improved version.

Unfortunately, life got in the way.  My wife and I moved to a different city.  I started commuting long distances.  She lost her job.  We went into debt.  She regained a job.  We paid off the debt.  We coasted.  Then we both lost jobs.  Then we both got new jobs.  My commuting was greatly reduced.  Then, I picked up the game making again.

Unfortunately, at this time, Flash was going through a transition.  Adobe had bought Macromedia and was redesigning ActionScript to be much more like Java, in an attempt to make it more powerful.  This was a problem for me, because I didn’t have time to learn a new language.  (ActionScript 3.0 used an entirely new syntax.)  So, I used the older syntax that I was familiar with.  It was slower and less capable, so I had to learn how to optimize my code.  The scope of the new game was ridiculously complex relative to the scope of the first one.  I did it anyway.  I spent a few weeks making my own path-finding algorithm.  It wasn’t great, but it sort-of worked.  Then I spent a day implementing A* path-finding.  Both experiences taught me a great deal.  Then, school began to get tough.  So, the programming slowed to a crawl.

In short, I invested a lot of time and energy into a scripting language that ultimately failed.  (I know that Flash is still widely used, but it is rapidly being replaced by mobile apps and HTML5.)  Still, I learned a lot about programming during this time.  Besides algorithms and optimization, I learned that you can make money even if your product sucks.

Still, none of this gave me the confidence to take a job as a programmer.  Knowing the syntax of several languages doesn’t give you confidence.  Knowing the limitations doesn’t give you confidence.  Tiny “successes” don’t give you confidence.  What gives you confidence is experience.  What gives you experience is work.  There are only two kinds of people who can give you work:  Those willing to take a chance on you, and yourself.

I have been fortunate in life.  Several people have been willing to take a chance on me.  I expressed interest in doing what they do, and they wanted to teach me.  But, they didn’t want to sit me in a lecture hall and profess the why and what-for of what they did.  They just wanted me to do it and learn as I went.  The first big opportunity like this that I snatched was at a defense contractor.  I had been working as an electromechanical assembler for about a year.  I had been put on hot projects that were halfway between development and launch.  Therefore, I had a lot of interaction with the engineers.  This interaction convinced me that I wanted to be an engineer.  I expressed interest to the right person at the right time, and was given an entry-level position.  I had only an Associate’s degree.

So, then I was part of manufacturing engineering.  If you’re unfamiliar with the various engineering fields, manufacturing engineers take design engineers’ drawings and turn them into real things.  Manufacturing engineers are tooling and process developers.  They procure and/or develop the tools for the job, they design manufacturing processes, and they train staff to actually perform the processes.  My entry-level job was essentially a support position for those engineers.  They let me play with all the new equipment.  Sometimes they let me develop tools.  A lot of the time they let me write procedures.  I wasn’t always successful.  Not everything I did was wonderful or exceptional.  Still, all of this experience gave me confidence that I could figure out most of the problems placed in front of me.  I was also fortunate to have a great mentor.  He seemed to know that I learned things the hard way, and wasn’t afraid to let me do it.  Of course, he offered guidance and support when needed.  It was invaluable.

Defense contracting began declining during the Great Recession. (See page 26)  There were massive layoffs.  I was caught in them, because my position was not an essential part of the process.  I wasn’t mad.  In all honesty, I was relieved.  I had been commuting for 2 hours every day.  This was an opportunity to get a job much closer to home and have more time for other things.

The confidence I had gained at that job led me to the next one.  I actually interviewed for a manufacturing engineering position, but asked for too much money.  They told me that during the interview.  Fortunately for me, my resume was passed around to other departments.  It landed on the Director of R&D’s desk, and he called me in.  He wanted me as an intern, to do CAD and a little bit of design work.  Again, I was fortunate that someone wanted to take a chance on me.  I had no CAD experience, but I had read and interpreted dozens of drawings.  I was computer savvy.  I was studying engineering at a local university.  But, I asked for too much money again.  Maybe my confidence was a little too high?

It worked anyway.  I’ve now been there for nearly three years.  I went from the CAD internship to full-on product design.  Like I said, I’ve been fortunate.  Other than my direct manager, other people at the company have been willing to take a chance on me.  They’ve given me many opportunities to learn and grow.  It has been amazing.  I feel super-confident in my CAD abilities, which I learned entirely at this job under the supervision of several great mentors.

So, what’s the point?  Wasn’t I talking about programming earlier?  Yes.  I was.  Take note:  Both of those jobs required 40 hours per week.  I’ve been employed in these types of positions for a total of 6 years, now.  That’s roughly 12,000 hours of combined manufacturing/design experience.  Meanwhile, I’ve been programming on-and-off in various languages over a much longer period, but with far less consistency.  That self-teaching experience hasn’t given me the same confidence as my work experience.  That is in spite of the fact that I’ve been self-teaching for much longer.  But, in self-teaching, there is no consequence for absolute failure.  There is nothing to deliver.  There are no deadlines.  These are the reasons that self-teaching is bad.  However, self-teaching is also good because it allows you to explore and experiment.  You don’t fear failure, because the only failure in self-teaching is failing to learn something new.  You’re not afraid to take risks when self-teaching.

So, what’s next?  I’m studying Computer Engineering.  I want to design embedded systems.  I want to be an entrepreneur.  I want to know enough about the technical side of things that I can reasonably identify and ally myself with really talented, intelligent people in those fields.  I want to solve a problem that a lot of people need solved.  I want to tell them that I can solve it for them.  And, that brings me to the next big hurdle, after Computer Engineering: Communication.

Communication has always been a struggle for me.  I think it’s a struggle for a lot of people, actually.  Some people are really terrible communicators even though they speak often; others are terrible communicators because they speak so little.  That’s because communication is an art.  It requires practice (experience) to gain confidence.  I’ve had several opportunities in my career to communicate ideas and concepts to small groups of people.  If my programming and design experience is a plate of enchiladas, then my communication experience is a tiny dollop of sour cream.

Since my work doesn’t often require that I speak to groups, I have to self-teach.  This is good, because I can take risks, I set my own deadlines, and I explore a lot of different ideas.  Hence, I’m writing this blog.  This is how I’m building confidence in communicating.  I learn by doing.

MC68HC11

I’m enrolled in Microprocessor Applications this term.  The MC68HC11 is what we are using to learn on.  It’s an old chip.  Very old.  Motorola released it in 1985.  That was the year I was born.  As of this publication, I am 29 years old, nearly 30.  Most of the people in my class weren’t even born when this chip hit the market.  Ouch.

The HC11 was originally intended for mobile applications!  When I say ‘mobile’, I mean ‘cars’.  It’s actually pretty feature-packed.  It has 38 GPIOs (16 bi-directional, 11 input-only, and 11 output-only).  It has onboard RAM and ROM, plus the ability to address external memory up to 64KB.  You read that right: 64KB.  It’s an 8-bit processor, and the crystal on the dev board is 8MHz.  For comparison, Intel released the i386, a 32-bit, 12MHz processor, in 1985 as well.  The i386 was top-of-the-line at the time.  Meanwhile, the HC11 was being implemented in engine control units.
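For flavor, here’s roughly what poking at the HC11 looks like, sketched in C for readability (in class we do this in assembly).  This assumes the default register block at $1000, with the output-only PORTB at $1004:

#include <stdint.h>

/* HC11 I/O is memory-mapped: writing to an address drives the pins.
 * PORTB lives at $1004 when the register block sits at $1000. */
#define PORTB (*(volatile uint8_t *)0x1004)

int main(void)
{
    uint8_t leds = 0x01;
    for (;;) {
        PORTB = leds;                  /* drive the port pins        */
        leds = (uint8_t)(leds << 1);   /* walk a bit across the port */
        if (leds == 0)
            leds = 0x01;
        for (volatile uint16_t i = 0; i < 50000; i++)
            ;                          /* crude software delay       */
    }
}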

In so many words, my professor told us that we should “crawl before we can walk.”  Thus, we begin our journey into microprocessor programming with the HC11.  Meanwhile, as I mentioned in an earlier post, I have an MSP430 that I bought LAST YEAR around this time.  (Winter break has always been a mad rush of ideas and project starts.  And, then all of that goes out the window when school starts again.)  The MSP430 is a Texas Instruments product.  You can get a starter dev board for about $10 here.  It comes with everything you need to begin learning.  But, the $10 version gets you a fairly featureless chip.  Meanwhile, the HC11 dev board costs $112.  And, the book for the class costs about $350 new.

TI has a proprietary IDE that lets you program in C.  They also have an Arduino-style IDE that lets you program in pseudo-C.  If you’ve never done any serious programming before, learn the pseudo-C and plan on transitioning to C later.  But, if you’ve got some experience and/or you want to learn the nitty-gritty of what you’re doing, use C.  For our Microprocessor Applications class, we’re learning to program in Assembly.
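For contrast with the assembly we’ll be writing, here’s what minimal register-level C looks like on the MSP430.  This assumes the usual LaunchPad wiring with an LED on P1.0; the register names come from TI’s standard msp430.h header:

#include <msp430.h>

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;   /* stop the watchdog timer */
    P1DIR |= 0x01;              /* make P1.0 an output     */

    for (;;) {
        P1OUT ^= 0x01;          /* toggle the LED          */
        __delay_cycles(100000); /* crude busy-wait delay   */
    }
}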

I anticipate it will be difficult, but not impossible.  We’ll see.