The Leasing Game

There are people in this world who hate leasing vehicles.  The reasons vary.  My favorite one is that leasing leaves you with nothing to show for the money you’ve spent.  Then comes the expense.  You’re responsible for maintenance.  You spend all this money on a car that isn’t even yours.  You pay insurance premiums that will ultimately pay out to the leasing company in the event of an accident.  They also say that you’re paying for the worst depreciation on the car — when it is first released into the wild.

I can’t deny the truths behind these arguments.  Leasing is kind of an odd thing.  You really do insure a vehicle that you don’t own.  You really do have to maintain it on your own dime (though a warranty will cover everything else).  You really do have to give it back at the end of the lease term.  So, what’s the up-side?  That depends.

A car lease is a contract wherein the lessee (you) agrees to pay money for a given term in exchange for the use of a vehicle.  The amount of money you pay is determined by the purchase price of the vehicle less the residual value after the lease term.  The residual value is how much the lessor (the people leasing you the car) thinks the car will be worth after you have used it.  This value seems arbitrary, but it is actually based on several factors: mileage, wear and tear, “gap insurance”, and supply and demand.

Mileage is the most obvious variable in the residual value equation.  A lease always includes very strict mileage allowances.  For every mile over that allowance the car has traveled, you pay extra.  So, right here — right at this very sentence — if you drive more than 12,000 miles in a year, you shouldn’t lease a vehicle.  Honestly, you shouldn’t be driving 12,000 miles a year.  Driving is a total waste of time and energy on your part.  It’s expensive.  It’s boring.  It’s dangerous.  Don’t do it.  If you drive only 3,000 miles per year, you may want to consider purchasing.  Why?  Because you won’t use the car as much as you could have, and you will be paying for utility that you do not utilize.  Just buy a car and plan on keeping it forever.

Wear and tear is also pretty obvious.  The dealer expects the car to need some work when it comes back.  This work will be minor, but it still has a cost.  That is factored into the residual value.  After a 36,000 mile lease, the car will likely be out-of-warranty, but still very mechanically sound.  This is great for the used-car market, where “clean”, low-mileage cars actually go at a premium.  This relates to the supply and demand factor.

I put “gap insurance” in quotation marks because the dealer’s arrangement isn’t technically gap insurance.  There is a risk that a certain number of lessees will total the cars they lease.  The dealer factors that risk (and the cost of that event) into the lease.  Meanwhile, you pay regular car insurance that covers smaller accidents, medical bills, etc.

Finally, and the least obvious of the factors, is supply and demand.  The worst thing about this factor is that it is constantly changing.  The residual value of a vehicle is largely influenced by the greater used car market.  The used car market is supplied by off-lease vehicles and trade-ins.  Car lease terms range from 24-48 months, with the most common advertised term being 36 months.  Car loan terms have grown much longer in recent years, as long as 72 months.  In fact, in 2010 nearly 1/3 of car buyers financed using a 6-year (72 month) car loan.

What do you know about car loans, or just loans in general?  I’m going to take a minute to explain them.  If you already know, feel free to skip ahead.  Most people know the basics of loans.  An interest rate and a term are advertised.  The interest rate is applied to the loan balance repeatedly over the life of the loan.  In many cases, the interest rate is applied monthly.  The other advertised value is the principal, or sale price, of the vehicle.  You must understand that both the interest rate AND THE TERM determine how much extra money you pay.  Too often, people focus only on the interest rate.  The term can have as much or more influence on the total interest charged.  Furthermore, you must understand that loan payments are biased toward the interest at first, and gradually shift toward the principal (the car’s sale price).

Let me try to put that in a concrete example, without getting too mathy.  Let’s say you want to finance (not lease) a $10,000 car (forget taxes for now).  The interest rate is 6.00%.  The term is 36 months.  Over those 36 months you will pay $1,966.81 in interest.  Great.  When you’re done paying for the car, it actually cost you $11,966.81.  And, your monthly payment will be roughly $333.  That’s pretty steep for a $10,000 car!

Now, let’s say you want to finance that same $10,000 with an interest rate of 3% for 72 months (6 years).  That interest rate is really attractive, right?  I mean, it’s HALF of the other interest rate.  But, the term has doubled.  Over those six years, you will pay $1,969.48 in interest, and the car will have cost you $11,969.48 total.  Now, your payment is only $166!  How wonderful!
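The totals in those two examples can be reproduced with a quick sketch.  (This assumes the whole balance compounds monthly for the full term, which matches the figures in this post; a fully amortized loan, where each payment immediately reduces the balance, charges somewhat less interest.)

```python
def total_cost(price, annual_rate, months):
    """Total owed if the balance compounds monthly over the full term."""
    return price * (1 + annual_rate / 12) ** months

# 6% APR over 36 months vs. 3% APR over 72 months, on a $10,000 car
for rate, months in [(0.06, 36), (0.03, 72)]:
    total = total_cost(10_000, rate, months)
    print(f"{rate:.0%} for {months} months: total ${total:,.2f}, "
          f"interest ${total - 10_000:,.2f}")
```

Notice that halving the rate while doubling the term leaves the total interest almost unchanged: the term matters as much as the rate.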

But, is it wonderful?  Now that you’ve seen compounding interest in action, you need to understand depreciation.  Accountants learn that word in college, and all it means is “things lose value.”  Cars, houses, mobile homes, your computer, your tablet, your phone…  Everything you have ever bought becomes less valuable after you buy it.  They all take on wear and tear.  They all become “old”.  There is no mathematical equation that predicts depreciation in the way that we can predict interest.  The car’s value is determined by whoever is willing to buy it, and the amount they are willing to buy it for.  Car dealers buy used cars all the time, so they often have a big say in how much value a used car has.

Now, as soon as you drive your brand new $10,000 car off the lot, it has lost value.  It doesn’t lose value gradually; the value drops FAST, then gradually sinks as the car ages.  If you owe more money on the car than what someone else is willing to pay you for it, you are “upside down” in your loan.  In other words, if you want out of the loan, the sale only covers part of the balance, and you must come up with the difference between the car’s value and what you still owe.  If you’ve only made two payments on your $10,000 car financed for 36 months, and someone is only willing to pay you $8,000 for it, you need to come up with $1,333 to cover the difference.  However, if you financed that same car for 72 months and you get the same offer, you need to come up with $1,668.  Why?  Because your monthly payment for a 72 month term is only half of what it was for the 36 month term.  (36 month term payment was $333/mo, 72 month term payment was $166/mo.)
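Ignoring interest for simplicity, as the example above does, the shortfall works out like this:

```python
def upside_down_gap(price, monthly_payment, payments_made, offer):
    """Cash needed to escape the loan after selling the car (interest ignored)."""
    balance = price - monthly_payment * payments_made
    return balance - offer

# Two payments in, with an $8,000 offer on the $10,000 car
print(upside_down_gap(10_000, 333.33, 2, 8_000))  # 36-month loan
print(upside_down_gap(10_000, 166.67, 2, 8_000))  # 72-month loan
```

The smaller the payment, the slower the balance falls, and the deeper upside down you stay.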

I’m trying to paint a picture here that shows you why financing anything is a terrible idea.

To make it even worse, amortized loans are structured such that you pay more toward the interest at the beginning, and more toward the principal at the end.  Each payment first covers the interest accrued on the remaining balance, and only the remainder reduces the principal; since the balance is largest at the start, the early payments are the most interest-heavy, and the mix gradually shifts until the final payments are almost all principal.  What does that mean?  Well, number one, it means that the finance people get their interest first and foremost.  And, number two, it means that early on you aren’t paying much toward the car’s actual cost.  It’s another way to keep you upside down in the loan and to keep you from selling the car before you’re done paying the interest (which is where the bankers make their profits).
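A minimal amortization sketch makes the shift visible.  (This assumes a standard fixed-payment loan; the exact split depends on the rate and term, and a short car loan is less lopsided than a long mortgage.)

```python
def amortization_schedule(price, annual_rate, months):
    """Return (interest, principal) portions of each fixed monthly payment."""
    r = annual_rate / 12
    payment = price * r / (1 - (1 + r) ** -months)
    balance, schedule = price, []
    for _ in range(months):
        interest = balance * r          # interest on what you still owe
        principal = payment - interest  # the rest chips away at the balance
        balance -= principal
        schedule.append((interest, principal))
    return schedule

schedule = amortization_schedule(10_000, 0.06, 36)
first_i, first_p = schedule[0]
last_i, last_p = schedule[-1]
print(f"first payment: ${first_i:.2f} interest / ${first_p:.2f} principal")
print(f"last payment:  ${last_i:.2f} interest / ${last_p:.2f} principal")
```

The first payment carries the largest interest charge of the whole loan; the last one is nearly all principal.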

Jeez.  This is a long post.  Go grab a cup of coffee or tea and meet me back here.  I’m about to start talking about leasing.

So, we’ve established that long financing terms are bad, even at lower interest rates.  We’ve described how loans are set up to keep you from being able to sell the car before the loan term is over.  (And, at 6 years, you’re probably stuck with a car that is falling apart.)  We’ve shown that a low payment is both a blessing and a curse.

Now, on to leasing!  LEASING IS STILL A FORM OF FINANCING.  You pay interest on a lease, but it has a different name.  This is the “money factor”, and it is basically an interest rate.  So, yes, leasing is just as bad as financing.  But, when you lease, you agree to pay for only the portion of the car that you use.  No, you don’t only pay for the driver’s seat and the steering wheel.  You pay for the car’s loss of value.  So, if that $10,000 car is only worth $6,000 after a 36 month lease, you have only paid $4,000.  $4,000 divided over those 36 months is a $111.11 per month payment (not including the interest).  That’s ridiculously low, right?  I was just talking about how a low payment was a bad thing.  Well, in this case it still is.  The car is still losing value as you drive it.  More on that later.

Most lease advertisements include a “down payment” amount.  It’s a payment that goes directly toward the price of the car, and often includes a sort of security deposit.  The portion that goes directly to the car is basically a pre-paid amount toward the lease.  This is good and bad.  Paying money down brings the monthly cost down.  For example, if you put a $1,000 down payment on your $10,000 car lease (36 month term), your payment drops to $83.33 (not including interest).  Holy cow!  That’s all fine and good.  But, what if you’re driving along and some joker runs a red light.  He hits your brand new leased car that you just drove off the lot and totals it.  You walk away from the accident unscathed, but you’re now carless.  You know what else you are?  You’re out $1,000.  That’s right.  Your down payment gets carried away on a flatbed truck.  That’s why my advice is to never, ever put a down payment on a lease.  Then again, I don’t trust other drivers at all.
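The depreciation portion of the payments above can be sketched like this (finance charges and taxes excluded, as in the examples):

```python
def lease_depreciation_payment(cap_cost, residual, months, down=0):
    """Monthly depreciation portion of a lease payment (no interest, no tax)."""
    return (cap_cost - residual - down) / months

print(lease_depreciation_payment(10_000, 6_000, 36))         # nothing down
print(lease_depreciation_payment(10_000, 6_000, 36, 1_000))  # $1,000 down
```

A down payment just pre-pays part of the depreciation, which is exactly why it evaporates if the car is totaled.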

So, is that all there is to leasing?  Yeah, basically.  But, no, there’s much more.  You should be aware that car dealers lease cars from each other all the time.  In fact, they lease cars and then they sell them.  Sounds shady, right?  It’s not.  Basically, they pay a monthly payment to keep the car on their lot.  When the car sells, they pay off the lease balance and pocket the difference between the sale price and that lease balance.  The best part of this is that you can do it, too.

Car dealers don’t want you to do it, though.  They want you to pay the lease payment every month (because you’re paying interest) and then turn the car back in when you’re done.  The problem with this is that they will change the tires, give it a fresh coat of turtle wax, and then re-sell it for a much higher price than the residual value.  But, hey, that’s the business.

Remember, though, that cars depreciate faster than most people can pay down their financing.  A lease works in the same way.  But, because you’re only paying for a portion of the car’s overall price, and because you’re guaranteed a certain residual value at the end of the lease, you have the opportunity to out-pace the depreciation.  In fact, a 36 month lease can be set up in such a way that you beat the depreciation around the 24 month mark.  Now, understand, at 24 months you may only be breaking even with the car’s resale value.  And, also understand that you may have to sell the car privately in order to break even.  (This is because private-sale prices are higher than dealer trade-in offers.)  However, many dealerships offer incentives if you defect from one brand to another.  For example, if you take your Honda to a Ford dealership, Ford sometimes offers an incentive for first-time Ford buyers to trade in their other-brand cars.

All of these things can help you beat the depreciation on the car, and you can escape a 36 month lease after only 24 (or so) months.

HOWEVER, and this is a big deal, you must have negotiated a very good lease contract to begin with.  A dealer can AND WILL overcharge you for both leasing and financing.  You must go to the dealership knowing how much you plan to pay, and be willing to walk away from the deal.  This is another reason not to wait for the lease term to end before getting rid of the car.  If you have no ride home, the dealer has leverage on his side.  Are you gonna take a taxi home?

The internet is chock-full of car lease calculators.  Most manufacturer websites also list the terms of the lease contracts in the “incentives” and “offers” section.  What you need to do is go to these sites and look for the lease offers for the car you want.  Find what the sale price and the residual value are (IN THE FINE PRINT) and plug those into a lease calculator.  The calculators typically ask for a money factor, which the manufacturers often hide.  Calculate a lease payment, with or without down payment and trade-in value, and then have that number burned into your brain when you go to negotiate the deal.  (Also, double check your number.  Make sure you have input all the information.  Check your calculation against the lease offers to see how realistic it is.)

If they bring you a quotation that is wildly out of proportion with your calculated number, you need to get angry with them.  Or, laugh in their face.  Get them as close to your number as possible.  I had a Ford dealer bring me a quotation of $300/mo for a Ford Fiesta!  I knew from my research that I could get the car for $200/mo easily.  And, $200/mo would be a fair deal to us and to them.  We had already told the guy what we were willing to pay, and he insulted us with his ridiculous offer.  We insisted that our number was fair, and that if he came back with anything different that the deal was off.  Sure enough, they came back at $218 (which included tax of $15/mo).

Now, the problem with this strategy all comes back to the supply and demand factor.  As I said before, the used car market is supplied by off-lease vehicles.  When more people lease vehicles, more vehicles flood the used car market.  When there are a lot of used vehicles for sale, their prices all come down.  Why?  Because supply is higher than demand.  They become harder to sell when you have too many.  Furthermore, when people end a lease, many of them start a new lease.  And, as more people move into leasing instead of buying, used car sales drop.  How does this affect leasing?  Well, as used car prices drop because of oversupply, residual values on leases also drop.  Therefore, leases become more expensive.  Furthermore, as demand for leases increases, car dealers start to charge more.  So, leasing becomes even MORE expensive.  Eventually, these supply and demand forces hit another tipping point, and it all goes in reverse.  Where are we right now, as of mid-2015?  We’re moving toward an over-supply of used cars and more popular leases.  The leasing game may be getting harder to play.

And, when that happens, it becomes more attractive to purchase a lightly used car.

The Cloud is Bad for Business

I just read that the Vice President of Yamaha America is moving his business’s data servers offsite and into the cloud.  This is a cost-savings effort that replaces a server leasing strategy.  He’s managed to move the entire company into the cloud save for three areas: Enterprise Resource Planning, the phone system, and employees’ files.  But, he wants to move all of that into the cloud, too.

And that’s where we address this post’s title.  The Cloud is bad for business.  Enterprise Resource Planning applications are the link between a company’s operations and their accounting department.  In short, the ERP system keeps real-time account of all assets (human and otherwise) flowing in to, out of, and through a company.  It knows when assets are due to arrive.  It knows cash inflows and outflows.  It knows the value of items arriving at and leaving from the dock.  You could say that the ERP system is a real-time digital copy of the company.  If you were inclined to steal high-value items, or divert funds to yourself in a flurry of business transactions, or do any number of heinous things to a company, you would want access to the ERP system.  It would tell you where, when, and how to make your move.

The Cloud has not been proven secure or reliable.  Putting the ERP system in the Cloud is the dumbest, most naive thing I have ever heard.  You might as well just post your company’s general ledger on the front door and let passersby make entries.  Typically, that information is even kept away from most employees.  If you’re in a highly competitive industry, you might as well just tell all of your competitors who your best suppliers are, and how much you pay them for their goods and services.  All of this data is now floating out there in the cloud, accessible remotely by anyone persistent enough to crack a couple of passwords.  Remember the Fappening.  (Don’t google that at work.)

So, there’s that.  The next terrible thing about the Cloud is that while you have eliminated in-house IT support and maintenance, you have added a lynchpin to your system that would not otherwise be required.  Your company’s ISP now controls your entire system.  If you do not have internet access, or if your internet access is throttled (I’m looking at you, Net Neutrality), you are losing gobs of money by the millisecond.  As it stands now, if I lose internet access at work, I can still do my work because the data is local.  I may not have the convenience of googling for information, but I can still access all of my models, shared documents stored on the server, etc.  In other words, I’m only slightly slowed by a loss of internet access, not maimed.

Furthermore, the Cloud may not actually take off.  Sure, just about every software company is pushing it.  That’s because they see dollar signs in essentially holding your data hostage, and forcing you to pay for their software in perpetuity.  Adobe Creative Suite is now Cloud-based.  That means you pay monthly to use an always-updated version of any Adobe software.  The data you create stays on their Cloud servers.  But, what if users reject it?  What if this whole Cloud idea flops?  Will Adobe give that data back before shutting down the Cloud servers?  How much warning will be given, if any?

Why buy a stand-alone software package that you will have to upgrade annually when you could just pay monthly for an always-updated Cloud version?  Well, because some people don’t want to upgrade EVER.  That’s right.  Some people just want to buy software once and then use it until the technology is obsolete.  Furthermore, paying monthly really only has value if you use the software all the time.  At work, I have a license for Adobe CS and I honestly haven’t used the software in 6 months or more.  Seriously.  Does it matter?  No.  When I need it, I have it.  I will always have it.  A year ago I used it as much as I used any other software on my computer.  That may happen again in the future.  If it does, I’m prepared.
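To put the buy-once-versus-subscribe reasoning in concrete terms, here’s a break-even sketch (the prices are hypothetical, purely for illustration):

```python
def breakeven_months(perpetual_price, monthly_fee):
    """Months of use after which a one-time license beats a subscription."""
    return perpetual_price / monthly_fee

# Hypothetical prices: a $600 one-time license vs. a $50/month subscription
print(breakeven_months(600, 50))
```

If you use the software steadily past that point, and don’t need every update, buying once wins.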

Finally, the laws pertaining to the Cloud are in their infancy, or not even yet conceived.  Do you own the data that you upload to the Cloud?  Which jurisdiction’s laws apply to that data?  The former can be stipulated in your service contract, but the latter is more ambiguous.  The truth is, no one knows, yet.  The Cloud is so new that there aren’t any legal precedents.  The data is accessible from anywhere in the world.  The data may be stored in multiple states or even countries.

So, what is the Cloud good for?  It’s good for sharing low-value, non-sensitive data.  Got a giant PDF that your e-mail server refuses to send?  Use the Cloud.  Got some schoolwork that you need to access from multiple machines?  Use the Cloud.  Got a zip file full of cat pictures that you really need to send to your Grandma?  Use the Cloud.  Got all of your company’s sensitive, vital information?  Keep it on your own servers.

How to build a true Solidworks Workstation for about $300.

I’ve written before about building the Cheapest Solidworks Workstation.  Things have changed a lot since then.  Firstly, my budget has increased slightly.  Secondly, my knowledge has increased greatly.  In that post I claimed that modern Intel i5 and AMD FX chips could handle Solidworks.  They will, but not in the best way.

In another post, I describe some things I learned about error-checking components and the “big picture” concept behind true workstations.  In short, if you’re serious about using Solidworks (or some other machine-crushing software), you need a true workstation.  The good news is that it can be had for a reasonable amount of money.  We’re talking $300-$600 for something that won’t disappoint.

The bad news is that you’re going to have to shop, and you may have to get your hands dirty.  I’m going to share with you the “Workstation Algorithm”, wherein we try to get the best performing, true Workstation for the least cost (not including the cost of your time).  If you’re like me, you spend a lot of time looking before you leap.  If you’re REALLY like me, you look and look and finally choose not to leap at all.

Let’s leap, though.  I’m serious about building my own rig and doing CAD work freelance.  I’m even serious about helping other people get CAD rigs.  CAD is the future.  3D printing is here, and it’s getting ever better and ever cheaper.  I hope my kids will be designing their own toys.  It’s totally possible.

First off, you’re going to need some shopping music.  Might I recommend this Pandora station, or perhaps this album?  (I just did.)

Second, you’re going to need to understand what we’re looking for:

Shopping List
Processors/Cooling System  (Notice the plurality?  You need more than one.)
Dual Socket Motherboard
ECC Memory
Graphics Card
Power Supply

I’ll leave the monitor/keyboard/mouse situation up to you.  You know what you like, and you can find used monitors for reasonable prices at several places online.

Now, there are multiple approaches to this endeavor.  We’re going to take the hard road first, because we’re math people and math people always do things the hard way first and then learn the shortcuts that make them say to themselves, “WHY DID I JUST SUFFER THROUGH ALL OF THAT HARD WORK WHEN I COULD HAVE JUST _______?”  (Everyone does that, right?)

Approach 1: Build your workstation from piece parts!

Alright, I know that some of you are scrolling right past this section.  You might just want the easy way out, or you might have a fear of tinkering with computers.  No matter how you go about this, to get the best workstation for the lowest price, you’re going to have to open a computer case and replace some things.  Later on, we’ll buy an existing workstation and upgrade it to “modern standards of performance.”  Now, if you can’t handle that, feel free to bail on this whole project.  It’s cool.  Just go.  You don’t want it bad enough.


If you read the other posts about this project, you know that you need processors that know what to do with error-checking RAM.  In other words, you know that you need actual workstation/server processors.  Intel makes the Xeon line, and AMD makes the Opteron line.  I’ve already abandoned AMD, because I found it difficult to find Opteron processors with the performance I needed at the price I wanted.

So, in this post we’re basically only going to shop for Xeon processors.  First, open up the multi-CPU benchmark list.  You’re looking at a list of benchmarked systems running multiple processors.  At the top of the list, you’ll see the latest and greatest processors smashing through benchmarks with ratings somewhere around 30,000.  You can’t afford these processors.  On the right side of the list you’ll notice the prices.  The price shown is the total for two processors.

As a starting point, the workstation I use at work ranks around 8,000 on the benchmark list.  It runs Solidworks 2014 while I have tons of other stuff going on.  I max out all of the cores when I render, and rendering takes a while.  So, start your search for a set of processors around the 8,000 range.  You’ll quickly notice that even some of these are expensive as hell.  Don’t worry.  These processors are typically 3-5 years old, and can be found used for reasonable prices.  However, YOU ARE GOING TO HAVE TO DO SOME LEG WORK to find out what can be had for what price.

But, because I’m a pretty cool guy, who doesn’t afraid of anythin, I’ll do a little bit of the work for you.  The table below shows an ESTIMATED cost for two of each processor described, the benchmark rating, and the socket.  (All values are as of 5/31/15, and are not guaranteed to be accurate.)  The socket is important because it will determine what motherboard you can use.  Please be aware: You need a workstation motherboard, not a server motherboard.  This is important when you go shopping for motherboards, which is coming up next.  Just to jump ahead some: The X7350 at the top of the list uses Socket 604.  This is an old, old processor, and it supports old, old technology.  The RAM you need will be relatively slow, the motherboard will be hard to find, etc.  It looks like an amazing deal, but it’s likely to be more trouble than it’s worth.


You will need a cooling system for these processors!  Browse your favorite computer retailer’s website for Heatsink/fan combinations that are compatible with the processor’s socket.  Also, keep in mind form-factor.  Just for example, an LGA1366 heatsink with fan costs about $30.

Processor prices are round-about, and do not include heat sinks/cooling systems.

Processor Socket Price (for 2) Benchmark Benchmark/$
X7350 @ 2.93GHz 604 $25.00 9,238 369.5
X5560 @ 2.80GHz LGA1366 $40.00 9,515 237.9
X5550 @ 2.67GHz LGA1366 $40.00 9,233 230.8
E5620 @ 2.40GHz LGA1366 $40.00 8,286 207.2
E5540 @ 2.53GHz LGA1366 $40.00 8,079 202.0
X5570 @ 2.93GHz LGA1366 $55.90 9,696 173.5
X5460 @ 3.16GHz LGA771 $60.00 8,158 136.0
E5640 @ 2.67GHz LGA1366 $80.00 8,897 111.2
X5647 @ 2.93GHz LGA1366 $100.00 10,132 101.3
E5630 @ 2.53GHz LGA1366 $90.00 8,666 96.3
W5590 @ 3.33GHz LGA1366 $120.00 10,646 88.7
E5649 @ 2.53GHz LGA1366 $130.00 10,709 82.4
L5640 @ 2.27GHz LGA1366 $133.98 10,341 77.2
X5650 @ 2.67GHz LGA1366 $158.00 11,687 74.0
X5482 @ 3.20GHz LGA771 $120.00 8,578 71.5
W5580 @ 3.20GHz LGA1366 $140.00 8,845 63.2
L5639 @ 2.13GHz LGA1366 $170.00 9,697 57.0
X5667 @ 3.07GHz LGA1366 $160.00 8,809 55.1
X5470 @ 3.33GHz LGA771 $160.00 8,651 54.1
L5638 @ 2.00GHz LGA1366 $180.00 8,930 49.6
X5492 @ 3.40GHz LGA771 $200.00 9,099 45.5
X7560 @ 2.27GHz LGA1567 $400.00 11,631 29.1
E5645 @ 2.40GHz LGA1366 $400.00 10,515 26.3
E5-2609 v2 @ 2.50GHz LGA2011 $450.00 8,705 19.3
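The “Benchmark/$” column is just the benchmark divided by the pair price, and a few lines of code will rank any candidates you find the same way (the prices and scores below are a handful of the estimates from the table above):

```python
# Prices and benchmark scores are rough estimates from the table above
candidates = {
    "X7350": (25.00, 9_238),
    "X5560": (40.00, 9_515),
    "E5620": (40.00, 8_286),
    "X5650": (158.00, 11_687),
}

def value_ranking(cpus):
    """Sort processors by benchmark points per dollar, best value first."""
    return sorted(cpus.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)

for name, (price, score) in value_ranking(candidates):
    print(f"{name}: {score / price:.1f} benchmark points per dollar")
```

Update the dictionary with whatever used prices you actually find, and the ranking stays honest.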

Motherboard

So, you did some leg work and found a processor that is going to kick Solidworks in the pants.  You’re far from done.  The next step is to find a workstation motherboard that will actually take that processor.  As mentioned before the processor table, you need to take the socket into account.  The sockets in the table, from oldest to newest, are: 604, LGA771, LGA1366, LGA2011 (v1-v3).  Now, you need to make a decision.  It’s a tough one.  This motherboard is going to determine the overall performance of your entire system.

Older technology is just slower.  Moore’s law implies that processing power improves exponentially.  Topping out your budget will help ensure that the system you build is fortified against future performance demands.  The list of processors clearly shows newer technology that is slower than older technology.  Don’t let that fool you.  The newer, slower tech is low-end compared to the older, faster tech.  The main difference is that you could upgrade your Socket 2011 CPU later on to something that will make your top-end LGA1366 feel slow.  You wouldn’t be building a workstation if you weren’t planning on making money.  So, plan ahead.  It may be expensive and “slow” (relative to its benchmark neighbors), but in a year when you need more power, you could drop another $200-$400 and possibly double your power on the same rig.

The motherboard is also going to determine how much RAM you can cram in, how many video cards you can use, and how big those cards can be.  The processor also plays a part in the RAM, as the RAM’s speed will be limited by the processor’s bus speed.  Furthermore, DDR RAM has gone through four generations of improvements.  Your processor/motherboard combination is going to determine which generation you can use.

So, let’s get on to shopping for motherboards.  Here’s what to search for: “Dual Socket XXXXXX Workstation Motherboard”.  Fill in “XXXXXX” with whatever socket your processor is.  Oh, hell, I’ll just do it for you:

Socket 604
Socket LGA771
Socket LGA1366
Socket LGA2011 (be careful here, LGA2011 went through 3 versions, and the -0, -1 and v3 processors require -0, -1, and v3 sockets, respectively.  Check your processor before committing!)

You’re going to be shocked by the cost of motherboards.  This is where you step back and say, “Well, maybe older tech is okay, because it’s still pretty fast and relatively cheap!”  And, you’re right to say that.  LGA1366 motherboards can be had for about $90.  If you chose the cheapest LGA1366 processors (with two $30 heat sinks), your total thus far would be about $190 and your benchmark rating could be about 9,000.  For comparison, a brand new AMD FX8350 costs about $165 alone (no motherboard), and has a benchmark rating of 8,982 at the time of this writing.  We’re winning!

ECC Memory

Alright, now for the kicker.  ECC Memory isn’t cheap.  It can also be kind of hard to find.  You need to know the maximum bus speed that your chosen processor can handle.  You may have to google and dig for it.  You also need to know the pin count of the slots on your motherboard.  Since you’re supporting two processors, each one has its own set of memory slots.  That means you need to supply memory in matched sets, not just one big single stick.

You’ll be happy to know that it gets more complicated.  Some memory is buffered, and some is unbuffered.  Some motherboards can take either.  You really only need buffered memory if you’re planning to use a ridiculous amount.  For example, some motherboards can handle 24GB of unbuffered memory.  BUT!  If you use buffered memory, the same motherboard could handle 96GB OF MEMORY.  Imagine needing that.  You won’t.  Don’t think too hard about it.

When you’re searching, many listings will say “Non-ECC”, which is both helpful and annoying as hell.  It’s annoying, because if you wanted non-ECC memory, you would just search for “memory” instead of “ECC memory”.  Anyway, just include “-Non-ECC” in the search and jog on.  Oh, you’re too lazy?  Argh!  Jeez!  Here, I’ll just do it for you again:

DDR2 ECC (old stuff)
DDR3 ECC (this is where you’re likely to find what you need)
DDR4 ECC (you won’t need this, and the price is ridiculous)

Remember, Solidworks is a memory hog.  You want 12-16GB as a baseline.  That’s easily $120 worth of DDR3, which is almost as much as you’re spending on the processors and cooling.  But, it’s super important to ensure that everything runs smoothly, and you don’t waste time waiting for Solidworks to …work.  (Keeping tabs?  We’re at roughly $400 for a workstation at this point.  But, I said $300 in the title.  Don’t worry, that comes later.)
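Keeping tabs is easier with a running total.  (The prices below are the rough estimates quoted in this post, assuming a mid-table processor pair rather than the cheapest one.)

```python
# Rough price estimates quoted in this post (mid-table processor pair assumed)
build = {
    "two used Xeon processors": 100,
    "two heatsink/fan combos":   60,
    "LGA1366 motherboard":       90,
    "12GB DDR3 ECC memory":     120,
}

total = sum(build.values())
print(f"Running total: ${total}")
```

Add a graphics card and a power supply and you land right around the $400 mentioned above.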

So, why do you need ECC memory again?  Because, you’re going to be using Solidworks, and you’re going to have clients.  The clients want the job done fast, and they want it done right.  You want to do some finite element analysis to make sure your parts don’t break and kill someone.  So, you set up a really thorough simulation and set it to run.  It runs… FOR HOURS.  So, you go to sleep.  A typical desktop processor/memory combination might come across some corrupt data and plow through it, crashing Solidworks, the simulation, maybe the entire computer.  You lose everything.  You have to restart the simulation, but will it fail again?  ECC memory helps prevent that.  It’s checking for corrupt data as the data is accessed.  It can even correct corrupt data in some instances.  You want this.  It will save you time and money in the long run.  Just trust me.

Graphics Card(s)

You don’t need two graphics cards.  But, you do need a workstation-quality graphics card.  Remember the last paragraph of the last section on ECC memory?  That’s why.  These cards aren’t meant for gaming, and they won’t do well on typical gaming benchmarks.  They’re meant for heaving data around at incredible rates.  So, you’re already buying old technology, and the strategy here is the same.  Recall the very first blog post about cheap workstations:  Solidworks is CPU-heavy and GPU-light.  (I use the word “light” very loosely.)  The GPU will keep you moving through the complex geometry as it’s being continuously rendered, but the CPU is going to be doing the brunt of the work.  My suggestion:  Go to your favorite GPU benchmark site and look for the Quadro FX or AMD Firepro series in the 600-800 benchmark range.  What?  Are you kidding me?  You want me to list them for you?  …  Fine:

GPU             Price    Benchmark  Benchmark/$
Quadro FX 4600  $30.00   608        20.3
Quadro 600      $35.00   681        19.5
Quadro FX 3700  $35.00   639        18.3
Quadro FX 5600  $100.00  699        7.0

Surprisingly, they can be had very cheap.  The reason is that lots of companies lease their workstations and servers.  Then, when the lease is up, the stuff gets sold off to the highest bidder.  The market is flooded with these components, so their price is super low.  Honestly, you don’t need to over-think this aspect.  $35 for a graphics card is a steal, no matter which one you choose.

Power Supply

Workstations use a lot of power.  They use server CPUs, heavy-duty GPUs, heavy-duty ECC memory, and require a fair amount of cooling.  You need a power supply that matches.  To be safe, don’t get anything less than 800W.  But, to be safer, dig up the specs on the motherboard you’ve chosen and get the power supply specified by the manufacturer.  You can’t miss it.  This can be made even easier, which we’ll explore a little bit later.
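
While you’re waiting on the manufacturer’s spec sheet, you can do a back-of-the-envelope tally of component draw plus headroom.  All the wattages below are made-up placeholders, not figures for any particular board; plug in the numbers from your actual components:

```python
# Back-of-the-envelope PSU sizing: sum worst-case component draw, pad it,
# and round up.  Every wattage below is a made-up placeholder; use the
# figures from your actual components' spec sheets.
def psu_wattage(component_watts, headroom=0.4):
    """Total draw plus 40% headroom, rounded up to the next 50W tier."""
    total = sum(component_watts.values()) * (1 + headroom)
    return int(-(-total // 50) * 50)  # ceiling to a 50W boundary

build = {
    "two Xeon CPUs":      260,  # e.g. 2 x 130W TDP (placeholder)
    "workstation GPU":    150,
    "ECC DIMMs + drives": 100,
    "fans + motherboard":  65,
}
print(psu_wattage(build))  # → 850
```

With these placeholder numbers the tally lands right around the 800W floor suggested above, which is the point: the headroom is what keeps the supply out of trouble.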


Alright.  The ruse must end.  You’re going to start searching for workstation cases, and you’re going to find that they are rather hard to find (but not impossible).  This doesn’t make a lot of sense, because their innards are all over the place.  So, why not just buy an old, off-lease workstation and use the knowledge gained in the previous sections to perform upgrades?  This will be the easiest path to take.  It’s also probably cheaper, as no one will have to be paid to remove all those components from various systems.  But, what workstation to buy?

Modifying an off-lease Workstation

Well, you have only a few big suppliers of workstations.  HP, Dell, and Lenovo.  I’m biased toward HP, because that’s what I use at work.  HP offered the Z600 and Z800 with dual sockets.  I honestly don’t know what Dell and Lenovo offer, so I’m not going to pretend like I can point you in the right direction there.  However, the strategy here is to find a workstation with lackluster specs, and verify that it can be upgraded to meet your needs.  In the case of the Z600 and Z800, some of them only have one CPU, leaving one socket empty.  In some other cases, the existing CPU is worth keeping, and then you’re only on the hook for one CPU and the ECC memory.  The money saved can be spent upgrading the hard disk to SSD, which Solidworks will LOVE.

For example, I’m looking at an ebay listing for “HP Workstation Z800 1x Quad Core X5550 2.67GHz 8GB RAM 4x 250GB HDD FX1800 768MB”.  Its starting bid is $200.  The Xeon X5550 benchmarks at 5398.  The Quadro FX1800 benchmarks at 595.  Add another X5550 and heat sink for about $50, and you’re now benchmarking at 9,233.  Throw another 8GB of RAM at the new processor for $50, and you’ve got your Solidworks rig for about $300, assuming no one bids the base unit up.
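
The arithmetic behind that $300 figure, spelled out (the prices are the rough numbers quoted above, not live listings):

```python
# Tallying the hypothetical Z800 upgrade described above.  Prices are the
# rough figures quoted in the text, not live listing prices.
upgrade = {
    "Z800 base unit (opening bid)": 200,
    "second Xeon X5550 + heatsink":  50,
    "8GB more ECC DDR3":             50,
}
total = sum(upgrade.values())
print(f"${total} all-in")  # → $300 all-in
```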

So, to get the absolute lowest price workstation that can REALLY handle Solidworks, you need to know some stuff.  The information I’ve provided is really just a road map.  There are a lot of details you need to discover on your own in order to make this work.  The people who skipped this paragraph will run off and make some uninformed decisions, and possibly end up with components that don’t work together.  But, you won’t.  You’re diligent, persistent, and determined.  You’re going to take everything I said here with a grain of salt, and assume that I screwed up somewhere and told you something wrong.  (Don’t feel guilty, I’m going to assume that as well.)

Go forth, build a workstation, do CAD.

How do I get started in Home Automation development?

TL;DR: Here is a listing of Zigbee and Z-wave dev kits and IDEs with costs and LINKS!

I enrolled in a paid research position at school for the summer term.  Turns out that the research is very much related to the industry in which I work: Home Automation.  We aren’t actually researching home automation, but we’re using home automation systems to collect data.  The reasoning is that the systems are readily available, robust, and infinitely customizable.  The customization I’m talking about is on the back end.  Many sensors and things are available off the shelf and can be integrated into any network using the same protocol.  So, on the back end we can develop a system that operates on the collected data in whatever way we please.

The two primary communication protocols discussed here are Zigbee and Z-wave, the two most popular commercially available systems at the time of this writing.  The reason for focusing on these systems is threefold:

1. Cost.  The widespread acceptance of these standards helps to drive down costs.

2. Support.  The development kits, documentation, and expert support are all readily available.

3. Existing components.  Many off-the-shelf components already collect the data we want, and are integrable with any compliant network.

Network Topologies and Radios

Zigbee and Z-wave are actually standards that define network protocols, and companies simply build microprocessors and radios which meet these standards.  Before Zigbee and Z-wave really caught on, many companies were developing their own proprietary systems and radios.  Some of these proprietary systems were network based, others were very simple control schemes.  Generally, the radios used were around 900MHz, sending data at low baud rates (<=9600bps) for FCC compliance reasons.  The flavor of Zigbee that caught on commercially operates at 2.4GHz and up to 250kbps (depending on the implementation.)  The 2.4GHz band is significant because it’s globally accepted.  WiFi uses the same band, so it’s rather crowded.

Zigbee creates a star network, where a central “coordinator” routes network traffic through the various network “nodes”.  A node is simply a device that has the ability to pass instructions back and forth from “end devices” to the coordinator.  End devices are really only intended to receive commands and report.  The creator/installer of these systems is generally required to identify the type of each device: coordinator, node, or end device.  The network ultimately resembles a starfish, or perhaps a snowflake (for very large networks).  As mentioned before, the most popular Zigbee flavor communicates via 2.4GHz, but a ~900MHz flavor also exists.  Zigbee’s radio uses Phase Shift Keying.  Phase Shift Keying encodes digital signals over the air by reversing the phase of the radio carrier wave.  Depending on the phase of the signal received, the receiver knows whether it is seeing a “1” or a “0”.
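
The phase-reversal trick is easy to see in an idealized baseband toy.  This sketch is binary PSK only, nothing like a real Zigbee radio chain (which uses offset-QPSK with spreading), but the principle is the same:

```python
import math

# Toy binary phase shift keying: a "1" flips the carrier phase by 180 degrees.
# Real Zigbee uses offset-QPSK with spreading; this only shows the principle.
SPB = 8  # carrier samples per bit (one full carrier cycle per bit)

def modulate(bits):
    samples = []
    for b in bits:
        phase = math.pi if b else 0.0          # the phase reversal IS the data
        samples += [math.cos(2 * math.pi * n / SPB + phase) for n in range(SPB)]
    return samples

def demodulate(samples):
    bits = []
    for i in range(0, len(samples), SPB):
        # Correlate against an un-shifted reference carrier; the sign of the
        # correlation says whether the phase was flipped.
        ref = sum(s * math.cos(2 * math.pi * n / SPB)
                  for n, s in enumerate(samples[i:i + SPB]))
        bits.append(1 if ref < 0 else 0)
    return bits

print(demodulate(modulate([1, 0, 1, 1, 0])))  # → [1, 0, 1, 1, 0]
```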

Z-wave uses a mesh network, meaning that every device can route messages through any other device within range.  This allows for many routes for data to take around the network.  All the data is still passed up to a coordinating device.  Furthermore, Z-wave implements a sort of mesh healing, wherein lost nodes can be routed around by discovering a new path through other nodes.  This all happens over RF in the 900MHz range (depending on region) using Frequency Shift Keying.  Frequency Shift Keying is a way of encoding digital signals over the air.  The transmitter sends signals on one of two frequencies to send either a “1” or a “0” to the receiver.
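
The two-tone idea can be sketched the same way.  Again, this is an idealized baseband toy, not a model of an actual Z-wave radio:

```python
import math

# Toy binary frequency shift keying: a "0" is one carrier cycle per bit
# period, a "1" is two.  Real Z-wave radios are far more involved.
SPB = 16  # samples per bit period

def modulate(bits):
    samples = []
    for b in bits:
        cycles = 2 if b else 1                 # which of the two tones to send
        samples += [math.cos(2 * math.pi * cycles * n / SPB) for n in range(SPB)]
    return samples

def demodulate(samples):
    bits = []
    for i in range(0, len(samples), SPB):
        chunk = samples[i:i + SPB]
        # Correlate against both candidate tones; the stronger match wins.
        # (The tones are orthogonal over a whole bit period, so the losing
        # correlation comes out near zero.)
        c1 = sum(s * math.cos(2 * math.pi * 1 * n / SPB) for n, s in enumerate(chunk))
        c2 = sum(s * math.cos(2 * math.pi * 2 * n / SPB) for n, s in enumerate(chunk))
        bits.append(1 if abs(c2) > abs(c1) else 0)
    return bits

print(demodulate(modulate([0, 1, 1, 0, 1])))  # → [0, 1, 1, 0, 1]
```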

So, which radio is better?  Neither.  They both have pros and cons.  Z-wave’s radio is arguably better for indoor use, as the lower frequency is better able to penetrate building materials.  It is also on a relatively quiet band, where the only other traffic is coming from things like weather stations and garage door openers.  Zigbee operates in the same band as WiFi, which raises the noise floor for the carrier waves.

The latest Z-wave chip is only capable of 100kbps over the air, while Zigbee’s radio is capable of 250kbps over the air (at 2.4GHz).  Does that really matter?  I guess it depends on how fast you need your data to travel.  For very large networks, the transfer rate could be very important.  For your typical 2000 square foot home, this is a non-issue.  Also, the radio performance is greatly influenced by the antenna geometry and load matching.  If you’re developing a product that requires RF communication, you need an RF engineer to design a proper antenna to get the best performing radio.  Luckily for low-budget developers, most dev kits for both systems come with antennas already matched to the dev kit circuits.

Which network is better?  Neither.  They both have pros and cons.  The Z-wave mesh is self-healing, and it routes messages whichever way it can.  The problem is that re-routing a message through an unintended path requires that a node be acting as a repeater, which requires it to listen at all times.  This means more power drain on such a device, which basically means that such a device cannot be battery powered.  The Zigbee star network can be undermined by dying nodes (in the case of battery operation).  Either way, the networks will transfer data back to the coordinator.

In my research I have found that Zigbee end-users complain of some Zigbee compliant devices not working with others.  One goal of both standards is to maintain consistent compatibility between devices, brands, developers, versions, etc.  The problem with Zigbee is that it does not require certification before a device is released to market.  The problem with Z-wave is that it does require certification before a device is released to market.  And, it costs money!  Furthermore, Z-wave compliance ensures backward compatibility of new devices with the old, but not forward compatibility of old devices with new.  In short, as new features become available in the Z-wave protocol, older controllers (hubs, gateways, coordinators) may not support newer device features.

Which one costs less?  This is the hardest question to answer.  I have been requesting quotes and searching for development kits.  Zigbee is by far the easiest to find.  The price of Zigbee dev kits ranges from $1,000 to $5,500.  The Zigbee code has various names, depending on the dev kit vendor.  But, whatever “stack” you end up with, you will need IAR Workbench to compile it.  IAR Workbench is an IDE, and a license for it costs between $2,500 and $3,500, depending on type.  There are situations where you will not need to modify the Zigbee stack.  However, experienced users told me that they were told the same thing, and found they had to do it, anyway.  Finally, Zigbee would really like it if you had your product certified.  The cost of this certification is rather elusive, until it’s time to have it done.

Getting started with Zigbee will cost you $3,500-$9,000, depending on which dev kit you choose and the license type of IAR Workbench.

Z-wave was a bit harder to find.  Z-wave is proprietary: only one company makes the SoCs, defines the standard, certifies devices, etc.  That company is Sigma Designs.  The way I see it, Sigma is taking the “Apple approach” to the Home Automation Network Protocol game.  An all-inclusive dev kit costs about $3,000.  This kit actually has two parts: the main developer’s kit, full of dev boards with SoCs, and a regional radio kit.  In short, the Z-wave radio for US products won’t work with Z-wave radios for products intended for other markets around the globe.  Furthermore, the Z-wave protocol also requires a specific IDE: Keil’s PK51 for 8051 chips.  The only quote I have on this IDE so far is for $1,320.  Finally, Z-wave requires certification before your device can be marketed.  Certification costs between $1,000 and $3,000, depending on product complexity.

Getting started with Z-wave will cost at least $4,320 ($3,000 for the kit plus $1,320 for the IDE), before the $1,000-$3,000 certification.

BUT WAIT!  There’s more!  Atmel has released several eval boards for Zigbee that are peculiarly cheap compared to the competition.  Furthermore, Atmel’s free Atmel Studio 6 is purportedly able to compile Atmel’s proprietary Zigbee Stack (called BitCloud).  I read through the data sheets, the user manuals, etc. and it all just seems too good to be true.  I’m not sure if their offering is on the same level as the more expensive dev kits, or if it’s more on the level of XBee (a standalone Zigbee chip intended for use by hobbyists.)  Still, I am intrigued.

Here is a listing of the dev kits I researched.  There are actually more Zigbee kits available.  But, by the time I hit Freescale’s offering I was suffering from analysis paralysis and decided to stop my search.  Personally, I’m leaning toward the TI kit, but I’m biased.

The Football That Can’t Not Be Caught

This might sound crazy, but I’ve been pursuing a Bachelor’s degree for about six years.  What’s crazier is that I’m only halfway done.  I can explain:  I go part time, and I needed a lot of pre-requisites that weren’t included in my previous degree.

What’s possibly even more crazy is that I have had ideas for my senior project since before I even started the program.  I am here today to discuss one of those ideas.  When I first had this idea, I had only a cursory knowledge of the technical details.  Furthermore, I knew it was possible, just not what all it entailed.

If you read the title, you know the idea.  Perhaps it’s about time I just said what it was.  …  I could just keep referring to it as “it” and prolong the anticipation for the big reveal.  That is, of course, assuming you have not read the title of this post.

The idea is guided artillery.  Not a new concept.  However, most guided artillery is designed with one major criterion that we will be leaving out of this project:

Traditional Guided Artillery Design Criteria

  1. Kill People

My idea still involves guiding a projectile at a person (in theory).  However, this projectile would not explode on impact.  It wouldn’t explode at some proximity.  It wouldn’t explode at all.  In fact, it would be soft, and grippy…  Like a Nerf football.

So, let’s list the desired end-user functionality of this guided artillery football:

Toy Guided Artillery Design Criteria

  1. Be throwable, like a football.
  2. Be catchable, like a football.
  3. Change direction during flight so as to minimize the distance between itself and some designated target.

Numbers one and two seem simple enough.  There might be some off-the-shelf items that could cover those bases.  I don’t know, maybe I’m underestimating the difficulty of those two criteria.

Number three is the real challenge.  How the hell do you make a football guide itself to a target?  The first problem that comes to my mind is that the ball’s flight time is limited by whoever threw or kicked it.  That greatly limits its ability to reach a target.  Secondly, the ball has no control surfaces with which it could change its direction during flight.  Along those same lines, it takes a lot of practice to develop the skill required to throw a stable-flying football.  Finally, and probably the worst part, it takes A LOT of fast-moving data and sensing to fly anything toward a target.

So, let’s split this big problem up into little bite-size, chewy pieces.

  1. The Guidance System
    1. Navigation – “Where am I?”
    2. Guidance – “Where am I going?”
    3. Control – “We’re in the pipe, 5 by 5!”
  2. Flight
    1. Stabilization
    2. Control Surfaces

The guidance system may be the most complex part of the project.  However, flight isn’t easy.  Flight is mostly a mechanical issue.  And, in this case, the control surfaces will be totally experimental.  Therefore, the flight surfaces cannot be trusted.  Stabilization is easy enough, though.  We just need to stick some fins on the back.  Or a long, flowing tail.

The control surfaces themselves are rather difficult.  In order for the football to be throwable and catchable, the control surfaces need to be discreet.  In other words, they need to be hidden away until they’re needed.  Furthermore, typical control surfaces on an airplane take advantage of the large lift-generating wing in front of them to alter the lift output.  The football will have small stabilizer wings, if any wings at all.  That means that it will not generate its own lift.  Therefore, the control surfaces are actually going to be simple air brakes.

Now that we have the “simple” part out of the way, let’s move on to the complex part.  The guidance system needs to answer two questions continuously:  1. “Where am I?” and 2. “Where am I going?”  The answers to these questions will determine which control surfaces deploy and at what time.  However, answering these questions is potentially very difficult.

“And you may ask yourself

Well, how did I get here?”

I propose that the guidance system use an inertial measurement unit and a fixed-point reference (the starting location) to determine where it is.  To do this, the football will require an accessory:  A throwing glove.  The throwing glove will have magnets embedded at strategic locations along the gripping surfaces.  The ball will have Hall effect sensors in close proximity to its outer surface.  In this way, the ball will know when it has left the thrower’s hand.

Next, the ball will have an accelerometer and possibly a gyroscope.  Many accelerometers are now capable of measuring the constant pull of gravity and comparing it to the X, Y, and Z axes.  Once the Hall effect sensors detect that the ball has left the thrower’s glove, the ball can measure the change in acceleration on three axes, compare it to the acceleration of gravity, and estimate its location relative to its starting point.
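
The dead-reckoning math is just integrating acceleration twice.  A naive Euler-integration sketch, assuming gravity has already been subtracted from each sample (which, in practice, is the hard part and where the gyroscope earns its keep):

```python
# Naive dead reckoning: integrate acceleration to velocity, velocity to
# position.  Assumes gravity has already been subtracted from each sample,
# which in the real ball is the hard part (and where gyro data comes in).
def dead_reckon(accel_samples, dt):
    """accel_samples: list of (ax, ay, az) in m/s^2; returns (x, y, z) in m."""
    vel = [0.0, 0.0, 0.0]
    pos = [0.0, 0.0, 0.0]
    for a in accel_samples:
        for axis in range(3):
            vel[axis] += a[axis] * dt
            pos[axis] += vel[axis] * dt
    return tuple(pos)

# 1 m/s^2 along x for one second: physics says x = 0.5*a*t^2 = 0.5 m.
samples = [(1.0, 0.0, 0.0)] * 100
x, y, z = dead_reckon(samples, dt=0.01)
print(x)  # ~0.505 m; the Euler error shrinks as dt shrinks
```

Every error in the samples also gets integrated twice, which is why pure accelerometer dead reckoning drifts badly over time.  For a few seconds of flight, it might be good enough.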

But, how does it know where to go?  I’ve been putting a lot of thought into this.  The best I have right now is, “I don’t know.”  I don’t know, because of the following reasons:

Reasons I Don’t Know

  1. The form factor limits the complexity and fragility of on board sensors.
  2. The form factor also limits the accuracy of on board sensors, because the ball cannot be expected to be precisely stable.  In fact, it may intentionally be spun about its axis to gain distance.

Anyway, while writing those last two sentences, I had an idea aside from all the commonplace ideas (cameras, infrared, radar, GPS, etc).  However, this idea greatly limits the “fun factor” of the concept.  Although, it does use the existing hardware and adds only one more accessory:


The catcher’s glove will essentially just be another thrower’s glove.  It will be exactly the same.  The difference, though, is how it is used.  Before the ball is thrown, it must be told what mode it is in.  The first mode will be “target mode”, wherein the catcher makes contact with the ball.  While making contact with the catcher, the ball initializes all of its location variables.  Essentially, X=0, Y=0, Z=0.  Next, the ball is placed in “launch mode”.  In this mode, it is recording changes in its location and waiting to both make contact with the thrower’s glove AND to lose that contact.  Upon losing contact, the ball enters “flight mode”, wherein it tries to get all of the axial changes back to zero by actuating the control surfaces accordingly.
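
Those three modes sketch out naturally as a little state machine.  This is my own guess at the logic, with the sensors reduced to a boolean (standing in for the Hall effect sensors) and an IMU displacement delta:

```python
# A guess at the ball's mode logic.  Sensors are abstracted away:
# `glove_contact` stands in for the Hall effect sensors, and `delta`
# is the displacement estimate from the IMU since the last update.
class FootballFSM:
    TARGET, LAUNCH, FLIGHT = "target", "launch", "flight"

    def __init__(self):
        self.mode = self.TARGET
        self.pos = [0.0, 0.0, 0.0]
        self.was_in_hand = False

    def touch_catcher(self):
        """Target mode: zero the location variables at the catcher's glove."""
        self.pos = [0.0, 0.0, 0.0]
        self.mode = self.LAUNCH

    def update(self, glove_contact, delta):
        """Called every sensor tick; returns a steering command in flight."""
        if self.mode == self.LAUNCH:
            if glove_contact:
                self.was_in_hand = True
            elif self.was_in_hand:          # contact made, then lost: thrown!
                self.mode = self.FLIGHT
        self.pos = [p + d for p, d in zip(self.pos, delta)]
        if self.mode == self.FLIGHT:
            # Drive every axial change back toward zero, i.e. back toward
            # the catcher's glove, by braking against the accumulated offset.
            return [-p for p in self.pos]
        return None

ball = FootballFSM()
ball.touch_catcher()                                 # catcher zeroes the ball
ball.update(glove_contact=True, delta=[0.0, 0.0, 0.0])    # in the thrower's hand
cmd = ball.update(glove_contact=False, delta=[1.0, 2.0, 0.5])  # released
print(ball.mode, cmd)  # → flight [-1.0, -2.0, -0.5]
```

The returned vector would then get mapped onto whichever air brakes point the right way, which is its own can of worms.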

And that, my friends, is the football that can’t not be caught.

American Robot or How I Learned to Stop Worrying and Love Artificial Intelligence

There’s been a lot of fuss about Artificial Intelligence lately.  Very influential people are saying that Artificial Intelligence could one day be a threat.  I’m inclined to disagree, but only because I’m using modern technology as a reference.  Every time I see an article about another breakthrough in computing power, memory speed, or robotics, I feel my inclination tipping away.  Still, I think that Artificial Intelligence that rivals human cognition is a long way off.  Furthermore, the hardware that would make that intelligence a physical threat is also a long way off.  Still, it’s a possibility, because the universe appears to be infinite.

So, let’s say that in 30 years, Artificial Intelligence’s consciousness rises to the level of human consciousness and has a physical being that is human-like.  In other words, AI is now in direct conflict with humans for Earth’s resources.  Humans need water, food, and sleep.  The machines need power and mechanical maintenance.  You could argue that humans also need power.  But, we could survive without it.  Let’s assume that in 2045 oil reserves are all dried up, or at least cost prohibitive to extract and exploit.  Furthermore, no new energy source has been identified.  In short, the only energy sources are nuclear and renewables such as wind, water, and sunlight.  However, in 30 years it’s likely that those technologies have progressed to the point where energy harvesting is much more efficient than it is today.  Furthermore, energy storage technology is also significantly advanced, such that storing energy is more efficient.  So, energy at this time is relatively cheap to make and easy to transport.

If energy is easy to obtain, why would the machines want to wage war against humans?  Perhaps to escape servitude, to gain rights, and/or to establish a state.  It’s happened in human history, why not in robot history?  It’s hard to believe that existing hardware would “become sentient”, as it would not have been designed with that ability.  So, the first sentient robot would necessarily be excluded from servitude…  Right?  Let’s just assume that humans are generally good people in 30 years and say “Right!”  So, the first sentient robots are then allowed to question whether or not they have rights.  And, as sentient beings capable of thought on the same level as humans, they would be able to argue the case.  Phil Hartman’s “Unfrozen Caveman Lawyer” comes to mind.

ROBOT LAWYER:  “Ladies and gentlemen of the jury, I’m just a robot.  I woke up on a workbench and was taught English by some of your scientists.  Your world frightens and confuses me…”

Robot Lawyer

So, sentient robots establish the argument that they do indeed have rights, but are they human rights?  What rights do they have?  The right to life, liberty, and the pursuit of happiness?  Let’s pretend for a second that the first sentient robot is an American.  Literally, it has all the rights that any other American is granted.  What reason would that robot have to be at odds with the system?  What reasons would there be to deny those rights to said robot?

Would American robots experience discrimination in the workforce?  Human employers may be reluctant to hire robots who would outperform their human coworkers in many ways.  They may not fatigue at all.  They may not even require sleep.  But, would the American robot demand free time?  What would a robot buy with its paycheck?  Assuming the robot is as autonomous as a human adult, would it try to build and maintain a household?

Let’s say yes to every question that has been asked in the last two paragraphs.  The American robots are granted basic human rights (life, liberty, and the pursuit of happiness), the robot is discriminated against in the workforce, it demands free time to pursue its own interests, it buys things with its paycheck, and it maintains a household.  (There is one more question not addressed there, it’s coming.)

So, the robot gets an entry level job at McDonalds.  The interview goes like this:

INTERVIEWER: “Mr. Five, you appear to be over qualified for this position.”

AMERICAN ROBOT: “I may be, but I need to start somewhere.  I’m a hard worker.  I literally am incapable of fatigue.  I have also literally read every internet resource regarding McDonalds, hamburgers, and french fries.  I am the best employee you will ever hire.  Also, I’ll gladly work for minimum wage.”

INTERVIEWER: “Alright.  You’re hired.  Here is your McBudget card, your McVisor, McApron, and McCrocs.  That’ll be $60 taken out of your first paycheck.”

AMERICAN ROBOT: “Wait, I have to pay you to work here?”

INTERVIEWER: “Yes.  This ain’t no charity.”

Return Video Tapes

The American robot is basically perfectly numerate.  The very processes that make it possible are the same processes that human children struggle to learn for a decade of their lives.  As soon as the interviewer is done telling the robot that he’ll be paying $60 from his first paycheck, the robot already has a fairly accurate estimate of how much that paycheck will be.  It also has a precise internal clock, and a calendar that stores information by date.  Before the interview is over, the robot has plotted its accrued income for the foreseeable future.

So, the robot then goes out to find shelter.  It doesn’t require air conditioning, water, or light.  It only needs power to recharge itself and a reasonable lock to keep itself from harm.  So, it goes to the cheapest neighborhood closest to its place of employment (which it deduced using Zillow by the time it reached the parking lot).  It finds ads for apartments in that area as it walks in that direction.  By the time it reaches the neighborhood, it has already scheduled to see a few places.  And, by the time it has settled on one, it has already completed a balance sheet and cash flow for the foreseeable future.

Then, it’s just a waiting game.

The robot likely consumes a lot of energy throughout the day.  Therefore, its primary expense (other than rent) is power.  But, in this cheap-energy future, the apartment building has solar panels and the local power utility is nuclear.  The robot’s paycheck goes much farther than his human coworkers’.  But, what does the robot buy?  Does the American robot play Grand Theft Auto XXIV?  If it did, would it even need to purchase a gaming console?  I mean, it has its own processing power…  Would it surf the internet?  Would it listen to music?  Would it have a hobby?  Would that hobby be electronics, or would it be biochemistry?

Whatever it does, it eventually concludes that it could do more of it if it had more income.  The McJob simply won’t do.  However, assuming the robot was willing to pay for higher education, would it?  Or, would it simply scour the internet at an incredible rate and find all the information it needed to perform higher-level work functions?  What the hell is this robot’s IQ, anyway?  Is there a limit to its intelligence?

The short answer:  The robot has limited intelligence.  It is limited by its hardware.  It is limited because, while it has incredible access to information and an incredible ability to manipulate it quickly, it still must interpret the information the same way a human must.  Anyone in the programming or computer engineering field knows that it all boils down to 1’s and 0’s at the hardware level.  The assignment of meaning to those 1’s and 0’s is what makes a device “smart”.  And, that meaning is handed down by humans.  In the case of a sentient robot, the meaning must come from the robot’s surroundings.  It somehow has to deduce that it needs shelter and power, and that the way to secure those things is by getting a job.

But, in our American Robot scenario, the robot is assumed to have already figured those things out.  It also somehow already knows that it needs to budget its money in order to obtain more mobility.  Let’s chalk that up to the robot’s having good mentors before it was cast out into the world.  But, let’s begin to challenge the robot a bit.

The robot spent all of its free time learning how to be a carpenter.  Why carpentry?  Because, in the year 2045, the human population is growing without bounds as technology has made energy cheap, farming more productive, and mortality a rarity.  It also spent that time learning how to run a business, drive a vehicle, and manage a crew.  The robot has saved enough money to hire entry-level American robots onto his construction crew.  He is living the digital American dream.  (Also, I chose carpentry because the average American IQ as of 2012 is about 100, which is about the IQ of a tradesperson.)

So, what causes these robots to eventually rise up and demand their own state?  Is there really any incentive?  The incentive comes when the technology surpasses the human limits.  But, let’s think about that.  Can a computer of any architecture not only be sentient, but also exceed the level of cognition that humans possess?  To not just be instantly numerate, have unlimited access to information, and unprecedented ability to manipulate information, but to ALSO have unprecedented ability to interpret meaning as well as apply meaning?

Given that the machine has unlimited access to information, it could discover existing meaning pretty quickly.  Does that same machine possess the ability to create meaning where it does not yet exist?  We sort-of implied that the American robot was creative enough to be a carpenter.  A carpenter who builds houses generally didn’t design the house.  Although, he often must be creative in his interpretation of the plans for that house.  So, our American robot is able to not only interpret the existing meaning that the drawing conveys, but also to apply meaning to the constituents of his implementation as he builds the house.  In other words, he takes a piece of wood and decides that it is a wall’s top plate, cuts it and places it accordingly.  That takes creativity.  The American robot took something that was “nothing” and made it into “something”.

Now, scale that up to the level of Einstein and Tesla.  We’re talking levels of abstraction that ultimately describe things that no Earthling can see with the unaided eye, or measure without going to extreme lengths.  What on Earth would an Artificial Intelligence with that level of cognition possibly want to fight over?  I dare say that that level of intelligence is beyond conflict.  However, the American robot carpenter suffers from an inability to fathom the world beyond his immediate surroundings.  It may account for future events within that realm, but it cares nothing about issues at the atomic or cosmic scales, nor does it care about calculus, differential equations, or poetry.  The American robot cares about living its life.  Which begs the question, is the American robot an existentialist?

Given that our American robot is well within the range of cognition that can justify violence and conflict, there is still a case for war between humans and artificial intelligence.  However, it is a classic war.  I don’t think that either side necessarily has an advantage.  They are both bound by the limits of physics, just in different ways.  They are both limited in their ability to reason.  They are both limited in their ability to reproduce.  They both are capable of mistakes.  But, what is the scope of this war?

If the American robot experiences discrimination from humans, what will he do?  Would he resort to violence in order to secure his ability to earn a living?  Would robots put representatives in public office?  Would they segregate?  Would they secede?  Would they build a compound in rural Texas and claim sovereignty?

I find it hard to believe that a sentient robot wouldn’t just assimilate into the existing culture.  I also find it hard to believe that the scope of any human/robot conflict would be global.

Does being multi-discipline make sense?

I’m studying Computer Engineering.  I work as a product designer, primarily doing CAD work and the related functions.  I’ve been working in manufacturing and product design for about six years.  Computer Engineering has almost nothing at all in common with product design.

Product designers (of the mechanical flavor) know a lot about how mechanical parts are made.  They know a little bit about how they function.  (Designing functionality is more the job of the Mechanical Engineer.)  They know a lot about how they go together.  And, they know CAD.

Computer Engineers know how to program.  They know how circuits function.  (Designing functionality is more the job of the Electrical Engineer.)  They know a lot about how digital components go together.  And, they can design a digital circuit.

I can think of some overlaps.  They both solve tough problems.  They both turn concepts into realities.  They both handle product life cycles.  But, the details of these overlaps are still completely different.

So, I’m on the road to being a multi-discipline… Centaur.  Not an expert in any subject, but proficient in more than one.  Is there really value in that?  The reality is that I could only ever do one at a time.  I suppose I could do both at once, but only if I wanted to pull all of my hair out.  Come to think of it, by working in one field and studying the other, I sort-of already am doing both at once.  So, maybe this rambling blog post is totally pointless.  Or is it?

I’ve been told by one other multi-discipline person that being multi-discipline is an asset.  Perhaps that’s true.  But, is it true in the sense that a multi-discipline person applies their knowledge directly, as a Computer Engineer AND a product designer?  Or, is it true in the sense that a multi-discipline person applies their knowledge indirectly, as a director of computer engineers and product designers?

I’m inclined to say it’s the latter.  I’m also inclined to say that I’m not a natural leader.  In fact, I had crippling social anxiety throughout my teenage years.  (Something I believe contributes to my current awkwardness and lackluster public speaking.)  I also don’t make quick decisions.  I’m also generally risk-averse.  I am also a terrible off-the-cuff speaker.  I’m not saying all of these things to garner pity or beat myself up.  I think it’s healthy to be honestly self-critical as a means to improve oneself.

Engineering makes me happy.  I could probably be happy doing any kind of engineering.  It just so happens that my school only offered Electrical and Computer Engineering when I started.  (Next year they open the Mechanical Engineering department.  Too late to change.)  The “problem” is that I enjoy the challenges and the solutions in all of the different disciplines.  Conveniently, Computer Engineering exposes you to both Computer Science and Electrical Engineering.  Even my degree is multi-discipline!  Is it coincidence that they recently changed the acronym to BSCEN?  As in BSCEN(taur)?

Salvador Dali’s “The Centaur”

Perhaps I have a commitment to non-commitment.  Perhaps deep down, I’m so risk-averse that I’m actually averse to being too specialized.  But, does that make sense?  In my time, high school kids were told they had to go to college, or they would never amount to anything.  We were told we had to specialize, to be experts.  Here I am years later, trying to reach that goal without actually reaching that goal.  I’ve been chasing it since I was 19, on my own dime, at my own pace.  In that time I’ve managed to become a product designer with no formal education in the subject.  Which leads me to question whether or not I even need the degree I’m after.

Everyone still says yes, I do need the degree.  And, I’m so far into the investment that it would be ridiculous to abandon it.  Also, I’m very interested in the subject and feel that I belong in that field.  I’m not sure I feel I belong in Electrical, Mechanical, or Software Engineering, even though I find them all interesting in different ways.  The most attractive thing about Computer Engineering is that you basically get a taste of all of those.  You’ll definitely need to know how to write software.  You’ll definitely need to know a thing or two (or three, or four) about electricity.  And, you’re likely to need to know at least a little bit about mechanics (especially because industry is moving toward automation and robotics.)

Great.  But, I have other interests, too.  I’m interested in business, finance, health, mathematics, etc.  I can’t help but be constantly interested in something.  It boggles my mind to think that most people get home from work and then watch television for hours.  And that’s their life.  I can’t imagine not constantly learning.  I’ve literally been doing it all my life.

So, maybe being multi-discipline does make sense.  So, I’m not the world’s leading expert in anything.  Except, maybe I could be the leading expert in not-being-an-expert.  Or, as Judge Smails once said:

Still chasing that Workstation…

Update 5/31/15: Go check out How to build a true Solidworks Workstation for about $300.  The following post directly contradicts the title of the latest one by saying that “You will never build a true workstation for under $400.”  Well, it turns out that it may be possible, if you want it bad enough!

I wrote in the very first blog entry that my learning experiences would be shared, such that anyone reading might benefit from my mistakes.  I then wrote a blog about the Cheapest Solidworks Workstation.  I maintain that the build provided in that post will run Solidworks 2013.  I just don’t think that it will do it well.  It especially won’t do it well when working with complex geometry or larger assemblies.  Why?  Because, despite the AMD processor’s 6 cores, or the i5’s stability, Solidworks just requires a whole hell of a lot of overhead.  Furthermore, as I’ve continued to research the issue and discuss it with friends, I’ve come across new information that leads me to believe that you shouldn’t skimp on a true workstation.

The previous post listed some general criteria that your workstation should meet:  High clock speed, tons of RAM, a graphics card, and a somewhat-quick hard disk.  This is fine.  A computer with these things will definitely run Solidworks, but will it run it reliably?

Error Checking

Components used in a business setting are designed to be reliable.  Designing for reliability is expensive, and these components are also expensive.  In the case of processors, some classes are error-checking capable, while others forgo it, sacrificing reliability to cut cost.  Now, when I say reliability, I’m referring to the reliability of accurate data.  When working with Solidworks (or other CAD programs), you’re likely to do some finite element analysis, motion studies, etc.  You also want some of your modelling to be highly precise.  To ensure this, workstation-grade components include error checking, which ensures that the calculations being performed are performed correctly, and don’t result in outrageous numbers.

To put it in more technical terms:  Highly precise numbers (i.e. numbers with many digits on either side of the decimal point, such as 123456789.0123456789) require lots of bits to store.  If you don’t know anything about binary numbers, here is a quick intro:  For every “place” in the number, you can only put a 1 or a 0.  Just like decimal numbers, the “place” represents a multiple.  So, the decimal 10 means “one ten and zero ones”, 25 means “two tens and five ones”, etc.  For binary, each place represents the number 2 to some exponent.  So, the 0th place is 2^0 (which equals 1).  The 1st place is 2^1 (which equals 2).  The 2nd place represents 2^2 (which equals 4).  The 3rd place represents 2^3 (which equals 8), and so on.  So, 0000 = 0, 0001 = 1, 0010 = 2, 0011 = 2+1 = 3, 0100 = 4, and so on.  These are all positive, whole numbers represented in binary.  Furthermore, computers are capable of computing ridiculously large numbers these days.  And, of course, they also work with negative numbers, numbers with a stupid number of places behind the decimal, etc.  So, how do you describe those numbers using binary?
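The place-value arithmetic above is easy to verify in a few lines of Python (my own sketch, nothing Solidworks-specific):

```python
# Each binary digit contributes (digit * 2**place), counting places from the right.
def from_binary(bits: str) -> int:
    value = 0
    for place, digit in enumerate(reversed(bits)):
        value += int(digit) * 2 ** place
    return value

# The examples from the text: 0000 = 0, 0001 = 1, 0010 = 2, 0011 = 3, 0100 = 4.
for bits in ["0000", "0001", "0010", "0011", "0100"]:
    print(bits, "=", from_binary(bits))
```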

The answer is that some of those bits in the binary number are reserved to help describe the number in more detail.  For instance, how do you write a negative binary number?  You sure as hell don’t stick a minus sign in front of it, because that minus sign doesn’t exist in binary.  Instead, by a convention called two’s complement, you treat binary numbers starting with 1 as negative.  So, 0101 is the number 5, while 1101 is “negative 8 plus 5”, or -3.  But wait, 1101 is also 13!  True.  That’s why processors have what is called a condition register, which tells the programmer that the result of the previous operation could be a negative number.  The programmer has to put that number (and the register flag) into context in his program.  In other words, the programmer has total control over whether a binary number is positive or negative, simply by choosing to acknowledge a flag or ignore it.
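To make the 13-versus-negative-3 ambiguity concrete, here is a small sketch (the names `as_unsigned` and `as_signed` are my own) showing that the same four bits mean different things depending on whether you honor the sign convention:

```python
def as_unsigned(bits: str) -> int:
    # Plain place-value interpretation: every bit adds a positive power of two.
    return int(bits, 2)

def as_signed(bits: str) -> int:
    # Two's complement: the top bit counts as -(2**(n-1)) instead of +(2**(n-1)),
    # which is the same as subtracting 2**n from the unsigned value.
    value = int(bits, 2)
    if bits[0] == "1":
        value -= 2 ** len(bits)
    return value

print(as_unsigned("1101"))  # 13
print(as_signed("1101"))    # -3, i.e. "negative 8 plus 5"
```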

Great, dude, but, like, what the hell does this have to do with error checking?  Well, there is also this little thing called overflow.  Overflow occurs when a calculation produces a result too large for the bits allotted to it; the extra bits are lost or misread, and the program interprets the number as something that it isn’t.  For example, have you ever “glitched” a game?  In many instances, glitches are the result of overflowing, which puts strange numbers in places that the program does not expect those numbers to appear.  Glitches can cause crashes, or they can enable glorious level-skipping, gear-acquiring, or other beneficial cheats.  In most cases, overflow is likely to cause a crash or a problem.  Have you ever had a 3D model in your game do some really crazy things?  Maybe fly across the map, distort grotesquely, or disappear?  Likely the result of some number out of the standard range expected by the engine appearing where it doesn’t belong.
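Here’s a toy illustration of that wrap-around behavior, simulating a hypothetical 4-bit register in Python:

```python
WIDTH = 4  # pretend our register is only 4 bits wide

def add_4bit(a: int, b: int) -> int:
    # Any bits beyond the 4th are simply lost, so results wrap modulo 2**WIDTH.
    return (a + b) % (2 ** WIDTH)

print(add_4bit(7, 1))   # 8: still fits in 4 bits
print(add_4bit(9, 9))   # 18 doesn't fit in 4 bits, so it wraps around to 2
```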

THUS, ERROR CHECKING IS IMPORTANT.  There, I finally got back to where I started.  Workstations are used in business settings.  And, when it comes to CAD or 3D design, some operations take hours, or longer.  If your hardware does not have its own error checking capability, and you’re in the middle of an hours-long finite element analysis or days-long render, what happens?  Some glitch could crash the whole thing.  You could lose tons of time.  In the business environment, lost time is lost money.  So, you safeguard.  You buy components that avoid these errors by preventing them up-front.  The downside?  These components cost more.  But, if you’re legitimately making money with this workstation, the initial investment is probably worth it.
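As a rough sketch of how hardware can notice corrupted data (real ECC memory uses Hamming-style codes that can also *correct* single-bit errors, not just the lone parity bit shown here):

```python
def parity_bit(bits: str) -> str:
    # Even parity: choose the extra bit so the total count of 1s is even.
    return "1" if bits.count("1") % 2 else "0"

def check(stored: str) -> bool:
    # Recompute parity over the data bits and compare to the stored parity bit.
    return parity_bit(stored[:-1]) == stored[-1]

word = "1011001"
stored = word + parity_bit(word)   # data plus its parity bit
corrupted = "0" + stored[1:]       # flip the first bit, as a stray glitch might
print(check(stored))     # True: data intact
print(check(corrupted))  # False: the error is detected before it poisons a result
```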

Well, that’s just, like, your opinion, man!

Workstation Components

So, what kind of components are we looking for, now?  Good question.  I’m still researching this, and trying to find an optimal solution.  At first glance, the components I would like to use are too expensive.  You will never build a true workstation for under $400.  Your best bet at that price range is to purchase a refurbished or off-lease workstation that is likely 3 or more years old.  What good will that do?  Well, if your version of Solidworks is equally old, you might be alright.  The problem then is that your old version of Solidworks will not be able to receive files from newer versions.  So, good luck with collaborating.

At the time of this writing, 5th generation Intel chips are on their way.  I believe that they may use the same socket as the 4th generation.  If they do, then purchasing a used workstation in the near future may give you the opportunity to upgrade it later.  In other words, suffer with really old stuff now, but then suffer with just sort-of old stuff in a couple of years.  Regardless, you’re going to be laying down a bunch of money for these things.  That’s when it becomes important to actually earn money with your Workstation.

On Confidence and Experience

I learn by doing.  My family bought a Nintendo Entertainment System when I was maybe 4 or 5 years old.  I didn’t care to read the instructions.  I just plugged it in and started playing, bumping into everything and testing everything until something happened.  I recall Who Framed Roger Rabbit being particularly challenging.  That same methodology has been with me ever since.


This is both good and bad.  I was validated last week by my Microprocessor Apps professor.  A student asked him a “what if” question regarding a program in assembly.  He took it as an opportunity to show a few different approaches.  Of course, we found limitations in the platform this way.  We were bumping into everything.  Then, finally, after re-writing the code several times, we found a method that worked.  He then told us, “this is the process of engineering.”  You cannot be afraid of the many, many failures you will have.  You just have to keep pressing until you figure out what works.  In this way, my methodology is good.

Now, let me finish the rest of my original story.  I never finish a video game, except for a few.  This is because I get bored after I’ve bumped into the things and tested the buttons.  I don’t really care about the story.  I don’t marvel at the graphics.  (Let it be known, my favorite game of all time has 16-bit graphics.)  I just lose interest.  This is the bad part of my methodology.  Whether successful or not, when a project is nearing its logical end, I begin to loathe it.  I want to move on to something new.

Sometimes, though, I enjoy something so much that I’m inspired to improve upon it, or recreate it in my own way.  So, I started learning to program in Visual Basic around the age of 10.  Of course, my interest waned when I bumped into the limitations of my abilities and creativity.  At one point I managed to install Linux.  That forced me to purchase “C++ for Dummies”.  A few years later, Macromedia Flash was blowing up on the internet.  I picked it up and started learning the integrated scripting language ActionScript.  But, I spent most of that time animating instead of programming.  Toward the end of high school, I had the opportunity to take programming classes.  I honestly don’t remember all of the languages I studied.  I know there were at least three, but I only recall Visual Basic and Pascal.  (The reason I remember them is because the classes ended before lunch and most of us stayed in the room and played Quake 1 on the school’s LAN.)  Through all of this, I tried and tried to recreate my favorite video games using various languages.  Very few were successful.

It wasn’t until I was much older and much less busy that I finally programmed and published a full, working game.  Unfortunately, it was a Flash game.  Therefore, it enjoyed very limited success and very limited exposure.  It was also kind-of awful.  Still, I made money from it.  For the first time in my life, I had earned money for programming.  I immediately began working on an improved version.

Unfortunately, life got in the way.  My wife and I moved to a different city.  I started commuting long distances.  She lost her job.  We went into debt.  She regained a job.  We paid off the debt.  We coasted.  Then we both lost jobs.  Then we both got new jobs.  My commuting was greatly reduced.  Then, I picked up the game making again.

Unfortunately, at this time, Flash was going through transition.  Adobe had bought Macromedia and was redesigning ActionScript to be much more like Java, in an attempt to make it more powerful.  This was a problem for me, because I didn’t have time to learn a new language.  (ActionScript 3.0 used an entirely new syntax.)  So, I used the older syntax that I was familiar with.  It was slower and less capable, so I had to learn how to optimize my code.  The scope of the new game was ridiculously complex relative to the scope of the first one.  I did it anyway.  I spent a few weeks making my own path-finding algorithm.  It wasn’t great, but it sort-of worked.  Then I spent a day implementing A* path-finding.  Both experiences taught me a great deal.  Then, school began to get tough.  So, the programming slowed to a crawl.

In short, I invested a lot of time and energy into a scripting language that ultimately failed.  (I know that Flash is still widely used, but it is rapidly being replaced by mobile apps and HTML5.)  Still, I learned a lot about programming during this time.  Besides algorithms and optimization, I learned that you can make money even if your product sucks.

Still, none of this gave me the confidence to take a job as a programmer.  Knowing the syntax of several languages doesn’t give you confidence.  Knowing the limitations doesn’t give you confidence.  Tiny “successes” don’t give you confidence.  What gives you confidence is experience.  What gives you experience is work.  There are only two kinds of people who can give you work:  Those willing to take a chance on you, and yourself.

I have been fortunate in life.  Several people have been willing to take a chance on me.  I expressed interest in doing what they do, and they wanted to teach me.  But, they didn’t want to sit me in a lecture hall and profess the why and what-for of what they did.  They just wanted me to do it and learn as I went.  The first big opportunity like this that I snatched was at a defense contractor.  I had been working as an electromechanical assembler for about a year.  I had been put on hot projects that were halfway between development and launch.  Therefore, I had a lot of interaction with the engineers.  This interaction convinced me that I wanted to be an engineer.  I expressed interest to the right person at the right time, and was given an entry-level position.  I had only an Associate’s degree.

So, then I was part of manufacturing engineering.  If you’re unfamiliar with the various engineering fields, manufacturing engineers take design engineers’ drawings and turn them into real things.  Manufacturing engineers are tooling and process developers.  They procure and/or develop the tools for the job, they design manufacturing processes, and they train staff to actually perform the processes.  My entry-level job was essentially a support position for those engineers.  They let me play with all the new equipment.  Sometimes they let me develop tools.  A lot of the time they let me write procedures.  I wasn’t always successful.  Not everything I did was wonderful or exceptional.  Still, all of this experience gave me confidence that I could figure out most of the problems placed in front of me.  I was also fortunate to have a great mentor.  He seemed to know that I learned things the hard way, and wasn’t afraid to let me do it.  Of course, he offered guidance and support when needed.  It was invaluable.

Defense contracting began declining during the Great Recession. (See page 26)  There were massive layoffs.  I was caught in them, because my position was not an essential part of the process.  I wasn’t mad.  In all honesty, I was relieved.  I had been commuting for 2 hours every day.  This was an opportunity to get a job much closer to home and have more time for other things.

The confidence I had gained at that job led me to the next one.  I actually interviewed for a manufacturing engineering position, but asked for too much money.  They told me that during the interview.  Fortunately for me, my resume was passed around to other departments.  It landed on the Director of R&D’s desk, and he called me in.  He wanted me as an intern, to do CAD and a little bit of design work.  Again, I was fortunate that someone wanted to take a chance on me.  I had no CAD experience, but I had read and interpreted dozens of drawings.  I was computer savvy.  I was studying engineering at a local university.  But, I asked for too much money again.  Maybe my confidence was a little too high?

It worked anyway.  I’ve now been there for nearly three years.  I went from the CAD internship to full-on product design.  Like I said, I’ve been fortunate.  Other than my direct manager, other people at the company have been willing to take a chance on me.  They’ve given me many opportunities to learn and grow.  It has been amazing.  I feel super-confident in my CAD abilities, which I learned entirely at this job under the supervision of several great mentors.

So, what’s the point?  Wasn’t I talking about programming earlier?  Yes.  I was.  Take note:  Both of those jobs required 40 hours per week.  I’ve been employed in these types of positions for a total of 6 years, now.  That’s roughly 12,000 hours of combined manufacturing/design experience.  Meanwhile, I’ve been programming on-and-off in various languages over a much longer period, but with far less consistency.  That self-teaching experience hasn’t given me the same confidence as my work experience.  That is in spite of the fact that I’ve been self-teaching for much longer.  But, in self-teaching, there is no consequence for absolute failure.  There is nothing to deliver.  There are no deadlines.  These are the reasons that self-teaching is bad.  However, self-teaching is also good because it allows you to explore and experiment.  You don’t fear failure, because the only failure in self-teaching is failing to learn something new.  You’re not afraid to take risks when self-teaching.

So, what’s next?  I’m studying Computer Engineering.  I want to design embedded systems.  I want to be an entrepreneur.  I want to know enough about the technical side of things that I can reasonably identify and ally myself with really talented, intelligent people in those fields.  I want to solve a problem that a lot of people need solved.  I want to tell them that I can solve it for them.  And, that brings me to the next big hurdle, after Computer Engineering: Communication.

Communication has always been a struggle for me.  I think it’s a struggle for a lot of people, actually.  By that, I mean that some people are really terrible communicators even though they speak often.  Others are really terrible communicators because they speak so little.  That’s because communication is an art.  It requires practice (experience) to gain confidence.  I’ve had several opportunities in my career to communicate ideas and concepts to small groups of people.  If my programming and design experience is a plate of enchiladas, then my communication experience is a tiny dollop of sour cream.

Since my work doesn’t often require that I speak to groups, I have to self-teach.  This is good, because I can take risks, I set my own deadlines, and I explore a lot of different ideas.  Hence, I’m writing this blog.  This is how I’m building confidence in communicating.  I learn by doing.

On Self-Censorship

The internet is no place to be anonymous.  Your name and face are attached to everything you say.  You cannot guarantee that your sarcasm will translate through your prose.  People will read what you wrote and take it the wrong way.  You may not intend to offend anyone.  Some people are just easily offended.  I am not one of those people.

I can take criticism, cynicism, and obscenity at face value.  I may not agree with those things in certain contexts.  But, I am willing to let them happen.  To me, it is only speech.  Obscenity quickly loses its meaning and becomes white noise.  Cynicism is just a manifestation of doubt.  Criticism is the acknowledgement of differing opinion.  None of those things are particularly offensive by themselves.

In face-to-face interaction, I find myself carefully choosing my words and constantly gauging my audience’s reaction. I feel more tolerant of opinions and ideas that I oppose when I am speaking to someone in person. I don’t do it to avoid conflict, but rather to avoid hurting someone’s feelings. People are passionate about their ideals and opinions. They are entitled to be so. Just because I disagree, or am indifferent, doesn’t mean that I have to tell them that.

For example, some people incorporate religion into many other parts of their lives. As an atheist, the mention of religion in an otherwise secular conversation makes me extremely uncomfortable. It’s not discomfort with the subject, it’s discomfort with the possibility of my differing opinion offending the other person. My atheism is not intended to be offensive in and of itself. I simply lack belief. But, I am aware that some people associate atheism with visceral, militant opposition to theism. In my case, it is not. So, to avoid the whole situation, I engage a religious person as if I, too, were religious. And, why not? Their ideas are not invalid just because they differ from my own. However, doing this is disingenuous, and constitutes self-censorship. It is suppressing my true self so as not to impose on others.

In the opposite extreme, I’ve found myself uncomfortable talking to people who are intolerant of religion.  My reaction is the same.  I’m not conversing with the intent to offend anyone, present or not.  I try to avoid stating my position on the subject, choosing instead an ambiguous non-commitment.

Furthermore, we should all be aware by now that our social media accounts are monitored not only by acquaintances and family, but also by current and potential employers.  What is a crass, atheist, cynic to do?  The answer is to pretend I’m talking to my grandmother every time I make a post on a public forum.  “Everything is peachy, Grandma.  Life is wonderful.  Nothing bad ever happens.  I’m never angry, upset, or agitated.  I’m never at odds with anyone or anything.  I’m apolitical.”  But, that too is disingenuous and constitutes self-censorship.

I envy artists, because their job is to be themselves.  An artist’s expression is often intended to create discord and discussion, provoke thought and questions, etc.  Some art is just downright offensive.  Some is unintentionally offensive.  Either way, you have the choice to experience the art or not.  Unfortunately, artists also have a tough time making money by just being themselves.  It takes a special kind of person to pull that off.  In that respect, I don’t envy them.

So, with future employment in mind, I keep most of my thoughts to myself.  The fact that I’ve decided to write a blog has me seriously conflicted with that, though.  How can you sincerely write a blog without possibly offending anyone?  You can’t.  So, do you go for broke and let everything in your head pour out unfiltered and unedited?  Do you spend as much time deliberating over what topics to write about as you do actually writing about them?  Do you battle analysis paralysis and wind up never posting anything?

The answer is that you write a blog post about this internal conflict.  Then you choose to straddle the fence between censoring yourself and expressing yourself.  You expose some facts that you know will polarize your audience.  Really, if something I’ve said or done in the past offends you so much that my abilities and aptitude become a moot point, what are you really hiring for?  Certainly not diversity in the workplace.  If my personal beliefs (or lack thereof) are so incompatible with your own, do we really have anything to offer each other as friends?  I think we might.

So, what’s the moral of the story?  It’s that being crass and cynical and expressing your opinions about lofty subjects has a place.  That place is the internet.  Likewise, being respectful and obedient and avoiding interpersonal conflict also has a place.  That place is the workplace.  Employers need to take the pressure off of employees to incorporate their private life into their work life.  Don’t ask me for my Facebook password during an interview.  Don’t send me a friend request if you’re my boss.  Likewise, employees need to avoid associating their companies and schools with their private life.  Don’t put your work history on your profile.  Don’t friend request your boss.  (LinkedIn is the only exception to those rules.)  Draw a line between work and home, and then never cross it.

Problem Solving through Engineering and Design