PSoC Design Methods and Madness Blog - Cypress.com: Blog Posts http://www.cypress.com/?id=2394

How to solve the governments' budget crises, and design great code http://www.cypress.com/?rID=49449

This WILL get to an embedded design topic, but permit me a few paragraphs to get to the point.

If you have been paying attention to the news, our national government and those of nearly every state are in budget crisis. Here are just a few glimpses of the crisis:

"More than 40 states are projecting billions of dollars in budget shortfalls for fiscal 2012.

"No formal budget was ever adopted for the current fiscal year, which began on Oct. 1, 2010." 

"Facing another fast-approaching deadline to avert a government shutdown" 

None of this should be a surprise, especially if you've lived through more than one recession. When the economy is good, individuals and governments alike become much freer with their money. Then the economy turns, revenue (incoming taxes) slows down, and voila! Budget crisis. You and I know what we have to do when we have a personal budget crisis: spend less, get more efficient, or borrow money from someone willing to lend (much easier before 2008).

So which of these does government typically choose? First, borrow more (not always possible for some states, whose constitutions require a balanced budget - they can spend only as much as they take in), and if/when required, spend less. But if you and I were to spend less the way government does, we would start by cutting back on milk and bread while maintaining a steady stream of on-demand pay movies on hi-def digital cable. Why? Misplaced incentives.

"Human beings adjust their behavior based on the metrics by which they are evaluated." (a quote from HBR article by Dan Ariely: "You Are What You Measure." , NOTE, I have had trouble getting this site to load, but the link is correct). This means if you give someone an incentive, they will take it, and every measurement "scheme" you have is an incentive of some sort, though most not as you intended. Let's look at the well-known government budgeting practice called "Use it or Lose it". This technique encourages a program or department facing a budget surplus late in the fiscal 4th quarter (August) to spend every penny as fast as possible. 

Actually, there are many, many organizations that can help you get a piece of this action, like TargetGov.com with helpful tips like "End of the Fiscal Year, Big Government Spending". The actual tagline for this site is "Helping you sell what government buys". Here is the intro to the referenced article: "We are now entering the fourth quarter of the federal fiscal year. The federal government spends nearly 60 percent of the year’s budget at this time because of “use it or lose it” requirements." It goes on to provide tips for getting "your share" of this spending free-for-all. I cannot vouch for that statistic personally, but if true, it is shocking. No wonder it's hard to make government better, if that is accurate. Of course, it must be; I found it on the internet :-)

So when a budget crisis hits and planners are facing a large shortfall, if you and your efficient department have been showing more results year after year with proportionally less spending, guess what happens when the money is short: the team that is NOT bursting its budget certainly doesn't need more, compared to a program that year after year can't seem to get even the same results with more money. Repeat this year after year and you can be sure to find over-budgeting (padding to absorb the inevitable cuts one day) and frivolous depletion of surpluses.

So how to fix the budget crisis (crises)? Incentives, but ones that reward efficiency, backed up by actions. It is obvious the "use it or lose it" budget incentives are NOT doing anything for efficiency.

OK, now that that's settled, let's see how the same logic applies to designing embedded systems. Just like government budgets, you can easily incentivize the wrong behavior in your project. For instance, think about what happens when you measure the quality of a software product simply by the number of defects reported against it. Under such a system, what would you expect a developer to do when he/she finds a minor bug: report it, or spend time figuring out how to ignore it with a clear conscience? "It might just be a corner case", or "perhaps no one else will find it", or "maybe the root cause is someone else's code". Besides, accurately logging a bug and detailing the steps to reproduce it takes real time to do right, and in the end it only adds a black mark to your product.

So if you have a "defect" crisis in your project, look at the incentives and the forecasted results, the "gaming the system" results like the example shown in this article on the "Defect Black Market" that developed in one project. But look also at the less obvious incentives, starting with the behaviors you find common that are not what you want (you may even find a "friendly" on the inside to provide clues).

Or, like the government, just take the list of defects (or other measure) and cut (or dictate a cut of) 33% off the top. Crisis fixed.

Jon Pearson
Methods and Madness, March 14, 2011
Happy 17th Birthday Linux (version 1.0.0, with 176,250 lines of code, was released on 14 March 1994. Go Linus and the open sourcers!)

Ricky Gervais on Embedded Design http://www.cypress.com/?rID=49715

Really? Ya.

Ricky Gervais was interviewed in the Harvard Business Review ("Ricky Gervais on Not Having a Real Job"); if you subscribe to HBR you can read it, but anyone can listen to the 12-minute podcast.

If your knowledge of this comedian comes from the original BBC TV series The Office or his hosting of the Golden Globe Awards, the man on this podcast is NOT that man. And while Ricky did not specifically address what to do on your next embedded design, he did have some gems which can be applied to your project.

1) "Ask yourself 'Why am I doing this? What's the best that can happen?'" also ask "What's the worst that can happen?" Always critically examine what you are doing and why. What was a "good" idea at the start of the project or the start of the week may be a waste later, based upon the learning since.

2) "What matters is the work you've done" Take pride in your work. Don't be afraid to be recognized for it. Kinda goes without saying, though. Still.

3) "Write about what you know" was Ricky's response when asked why he did "The Office", but he also meant "write" about what other people know, as in everyone gets the office setting and situations. So on a project if your code and comments and design explanations aren't being understood, you missed your mark. Rewrite them for others, not for yourself. Especially if you don't wish to be fixing it for years.

4) "Be fair and upfront and you can't go wrong" Keep it real on the project, if you bring up a "problem" make sure you are talking about the "real" problem. If the "refresh rate" problem is more about you wanting to do the filtering design, be honest with the team.

5) "If everybody likes something no one will love it" Love comes with hate, like means it's watered down. When something is average it doesn't generate strong emotions, when something is great, it will also have its critics. But depending on the project, good and reliable might be exactly what is desired. Then again, the iPhone and iPad are not average, people love them and some do hate them.

6) "One veto and it's out" - Anyone on the team doesn't like something, it's dropped. What's left is everything great, but, of course lots of good ideas are rejected to keep the great. This again is how you rise above ordinary, and if your product is not required to be just good and reliable, you will need to reject some good ideas.

7) "Ya, probably not, though" Keep the language precise when you're discussing the project. This response by Ricky' to a question was very interesting, might lead you to wonder the next time he answers "Ya" is he really finished or if you wait long enough will he get to the point like "can't pay your salary this week".

A bicycle for your mind to ride to the forest Bach concert http://www.cypress.com/?rID=50127

Everyone likes "new", and Apple is currently the master of the new that more and more people want, all over the world. And some see only that: the compulsion to always buy the new product that Apple rolls out each year. I find myself yearning for these new products but tempering my purchases with my values, bred by Scandinavian-immigrant, Depression-era parents. But I do have an iPhone 4, and a MacBook (though it's almost 4 years old now).


In the following two-minute clip, Stephen Colbert eschews the depressing stories of the day (after reminding us of them all) to gush over his new iPad 2, which to him is better than his original iPad because it's thin and has a camera (which he demonstrates with his mug). But even so, he is now wishing for an iPad 3.

A poke at us consumers, but Apple and Steve Jobs are not just about getting us to buy something new; they are about bringing the newest/best "tech" to the non-tech masses in a way that it really can become part of their lives. The following clip of Steve Jobs from decades back shows him talking about making the computer a bicycle for the mind. The logic is fascinating (not the source code, the thinking).


And some still point to many of these bicycle adopters as just gullible consumers with more money than sense. To make that point, the next clip is the epitome of gullible. (And yes, everyone in the clip seems to have an iPhone.)


But the "bicycle for the mind" metaphor is much stronger than a simple April Fools gag (although the computers we now carry in our pockets can execute some awesome pranks). Bicycles are not hard to understand and use, but they take a little training. The mind must learn the balance and how to use the two spinning flywheels. And it isn't hard, but once mastered you can go way beyond pedaling and coasting. The next clip is one guy's demonstration, and if his bicycle was a computer he'd probably be cranking out iPhone apps.



Apple is praised not just for design but for function as well. What happens when design trumps function? Often great beauty, as demonstrated in the next clip.


But just as often, the result is questionable functionality. The phone shown below is a case in point (go here for more pictures and info: mocoloco.com/archives/022683.php).



As so often displayed by Bang & Olufsen in the past, design can trump performance and usefulness, but the simplicity of the iPhone/iPod/iPad seems to have propelled B&O to create the BeoSound 8: a beautiful presence and expansive sound, even from the compressed digital feeds we all consume today.



And at $999 from Amazon it's not cheap, but most reviewers who took the time to write agree that to them it was worth it, although one reviewer complained about the paper speaker cones, not what he expected for the price. (See for yourself if you want: www.amazon.com/Bang-Olufsen-BeoSound-8-Black/dp/B004BGTK14/ref=cm_cr_pr_product_top)


So, what is my point? The thing we do is more than simply what we make, write, or say. It is how it affects the world and those around us. Be sure to ask yourself often "Why am I doing this?" and if you can't answer that question well, you'd better start doing it differently. Let your mind ride a bicycle.

Jon Pearson


Methods and Madness, April 9, 2011, the 99th day of the year

44 years ago, the maiden flight of the Boeing 737

Never "get" a real job, make one instead http://www.cypress.com/?rID=47437 Author Scott Gerber has written a book Never get a "Real" Job and you can see his presentation (more of an advert) here: (portal.sliderocket.com/Scott-Gerber/Never-Get-a-Real-Job). While his advice is for new college grads on how to become entrepreneurs, what about those of us who already have jobs? Besides jumping ship, can we learn anything from him? You bet your life!

Scott Gerber is a "Gen Y-er" and writes to Gen Y-ers, a group suffering from high unemployment and low job satisfaction. His message is: you have been lied to; getting a "real" job is a dead end; learn to hire interns, not how to be one. What is most interesting is that this message has cycled and recycled over and over for each "generation" of youth entering the job market. The last catchy presentation of it I heard was "become you.com" at the turn of the millennium. It is as true today as it was when I finished school at the trailing edge of the Baby Boom and heard a similar mantra: "Make the job you want".

The reality is, we need entrepreneurial thinkers in the workforce, in our companies, on our teams. We need people who can look at what's on their plate and, if they see lemons, make lemonade - or open a lemon-exporting business, or write an advertisement like "Lemons are the new orange". Every job has crap in it, but if you handle it properly it can become fertilizer.

There is one more key advantage to employing entrepreneurial people - they see opportunities everywhere. This is a huge benefit in one particular respect: when an employee feels trapped in a job, his/her performance declines, and poor performance reduces one's perception of opportunities, which increases the feeling of being trapped, which further reduces performance, and on and on. The feeling of choice or opportunity makes what one does today seem much more like a choice, and when you choose something you feel better about it.

Of course, a strong entrepreneurial spirit can also catapult someone from your team/company out on their own. But if you really care about them as a team member and person, and they are jazzed to go out on their own, you should be happy for them, even if it leaves a hole. The better that relationship the more likely the budding entrepreneur will help you fill the gap, either helping out personally or in recruiting/training a replacement. And many start-ups are built upon experience gained in previous jobs in the industry, so there is a great chance that one employee loss could turn into several millions in new business in the near future.

So one last question: what is a "real" job anyway? I'm not really sure I've ever had one.

Listen to your customers - sometimes they are right http://www.cypress.com/?rID=47004

Apple Inc. is a company famous for saying that it doesn't ask customers what they want (at least Mr. Jobs has said so). The theory is that if you constantly deliver something the customer has never had before, you cannot ask them whether they want it. While this seems to work for Apple, I would argue that they don't strictly ignore their customers, since doing so would eventually make them go away; instead, they don't let customer feedback strictly drive new products. But Apple does obsessively deliver a fantastic experience for its customers. And Apple is not afraid to admit (to themselves at least, shown by their actions if not verbally) when they make a mistake. For an example, check out the latest version of the iPod Shuffle, and compare it to the version it replaced and the one before it.

The ugly truth is that we must talk to and listen to our customers, but the customer is only right when they actually vote with their wallets - and on that note one must agree that Apple is delivering. Listen aggressively, and then decide what the most important response needs to be.

Recently I re-visited some customers I had been with only 6 months earlier and found that I was the only one saying something different: the product I presented 6 months ago had changed drastically, but, surprise, what those customers were saying had not. I didn't listen the first time, but this time what they said aligned with the product (in its latest incarnation) I was presenting. And unlike the last time, I was receptive to their message. The difference was that this time I didn't feel I had the "final" answer, I needed their input, and it was amazing what I heard.

As a marketer, my customers often are the end customers, but everyone on any project at any level has customers, and in most cases we all make the same mistake: we let our "vain brain" filter what we hear in order to reinforce our personal convictions (to make us feel better, thus the vanity) and as a result filter out valuable feedback from the customer. Two separate books I recently read about the human brain deal with this phenomenon: the first was "How We Decide" by Jonah Lehrer (www.jonahlehrer.com/books) and the second was "A Mind Of Its Own" by Cordelia Fine (www.cordeliafine.com/a_mind_of_its_own.html). The phrase "vain brain" comes from Ms. Fine's book. Both books describe (and warn of) the human brain's ability to convince itself that it is right. And while this isn't all bad - it keeps one pushing hard on a problem that seems unsolvable - if the brain is convinced it is right, it can filter out all opposing views, to its owner's detriment. Which is what I did.

Both authors provide examples and come to a similar conclusion: the better one understands the "vain brain", the better one can identify the dangerous situation of extreme conviction. When you are most convinced something is true, you must also be most wary and questioning of the grounds for your conviction. In my case, my conviction was groundless (the project was barreling ahead to its final milestones, one of which involved customer adoption), and because I didn't realize it, it took 6 months before I finally heard what the customer was saying. They didn't want the original project as presented. They told me so, so I sold them on its benefits, convinced they would come around once the product was released. Guess which one of us was right.

I have a minor obsession with the workings of the human brain, which began after an enlightening management class in the last millennium. In that class I first understood the concept of projection, which is the human tendency to apply one's own motivations and convictions to others' behavior - in this case, to others I had tried to lead in a project. I couldn't understand why, as my project fell further behind and I shared this with a team member, I was driven to work late nights and weekends while he didn't seem to be fazed. While I never suggested he should increase his hours or intensity, I was frustrated when he didn't. It was only natural to work harder and longer when a project deadline was looming and we were behind - at least for me. My epiphany in that class was that I had expected that team member to react exactly as I would, which of course is ridiculous if you think about it. I didn't.

For more on the Apple iPod Shuffle story, take a look at this Wikipedia article (en.wikipedia.org/wiki/IPod_Shuffle). The first-generation Shuffle is best described as a white USB flash drive with music player buttons, released in January 2005. The second generation, released 18 months later, looked like an iPod Nano without a screen, with purely tactile controls. That generation lasted until the third generation, released in March 2009, removed all physical controls from the device, allowing only a special headphone to control it. It was so small that it should have come with a swallowing warning for children under 3. The fourth and current generation brought back the tactile controls of the second generation, but in a package 30% smaller, and included the VoiceOver feature of its control-less predecessor, arguably combining the best of the second and third generations. Oh, and the latest generation has 4 times the storage of the first-generation device at half the price.

Progress doesn't always follow a straight line, but neither does the human brain. Frequent feedback and course corrections can make the path smoother, but only if you really listen. 

Insanely…Great…PowerPoint? http://www.cypress.com/?rID=46764

There's a book I've been reading on creating and delivering presentations that I highly recommend. In it the author analyzes and dissects how Steve "I invented the iPhone, iPod, iPad and Mac, heard of 'em" Jobs creates and delivers mesmerizing keynote presentations. In his book "The Presentation Secrets of Steve Jobs" (carminegallo.com/books/), Carmine Gallo presents 10 steps to "becoming Steve Jobs" on stage:

1 - Plan in analog (Draw, a lot, don't powerpoint, think movie storyboards)
2 - Create twitter-friendly descriptions (Write your own headlines, use them often, others will too, so choose wisely)
3 - Introduce the antagonist (the easier to hate the better, like IBM and Microsoft, IRS is also a good one)
4 - Focus on the benefits (tell me what I will get and why I should care, and if possible testimonials from someone who already is benefiting)
5 - Follow the rule of three (organize and present three things/acts/sections at a time)
6 - Sell the dream, not the product (this means you have to make it dreamy, not spec-y)
7 - Create visual slides (the more text on the slide the more it will usually suck, use a picture whenever possible)
8 - Make numbers meaningful (draw the conclusions for the listener, help them understand the big numbers better, like "1000 songs in your pocket")
9 - Use "zippy" words (make the words understandable and more interesting, fun even, no jargon)
10 - Reveal a "holy $#!+" moment (build up to a memorable disclosure or revelation that the audience will gasp over)

The book was a good, informative, and interesting read. The main theme was to simplify the presentation/slides (a very hard task, hence lots of examples) and make them more meaningful to the listener. Turn the listener into a participant.

Key concepts supporting the 10 points were: the 10 minute rule (people lose interest after 10 minutes, so plan changes and attention getters every 10 minutes), mix it up (include demos and other speakers, this also works into the 10 minute rule), and engage the audience (pass things around, ask for groups or individuals to stand and recognize them, make the audience a player in the show, so to speak).

If you haven't seen Jobs in action, download a keynote podcast off iTunes (the one earlier this year introducing the "magical" iPad is typical Jobs, insanely great). Here is a link to it for download or online viewing: movies.apple.com/datapub/us/podcasts/apple_keynotes/ipad.m4v.

If you want to see a little more on the 10-step tips (but don't want to get the book), there is a 10-slide deck summarizing this here: www.insight24.com/event/23/54/05/rt/1/documents/player_docanchr_1/gotomeeting_presentation_secrets_of_steve_jobs.pdf. But the book is a good read and really makes it all come together with specific examples (and comparisons between Jobs and Gates presentations). Enjoy!

It’s Not a Defect, It’s … http://www.cypress.com/?rID=51582

This phrase is a familiar one, and everyone involved in firmware or software knows the familiar completion. That’s not what I’m writing about.

I have been seeing lots of defects in both software and firmware lately, and I use "lately" very loosely. This isn’t a local phenomenon or a new one, but one that seems to be trending fast, accelerating even, and not in the right direction. So if we don’t want a lot of unintended “features” in our software and firmware, what do we do?

I want to step back to the title and consider different endings that can hopefully begin the mindset change that will eventually reverse the trend.

It’s not a defect, it’s … a CHALLENGE
 
Every defect that ships, even when fully characterized, presents a challenge to Sales and Marketing with the customer. While honestly and fully describing problems and workarounds helps a customer who has already decided on your product, it presents a problem in “hooking” a new customer, and more struggles as they are “reeled in”. Every item needs to be explained and rationalized by the salesperson as to why it will not negatively affect them or their customers. And a tremendous amount of trust is required in this process.
 
So even as the design team looks at little defects as relatively easy to avoid or work around, every defect poses an obstacle to customer adoption. Consider the recent problems with Toyota and their accelerators (actually there were two main problems: floor mats that trapped accelerator pedals, and then pedals that surged or stuck). Every customer who has heard of this will have questions for the salesperson, but many more won’t even give the salesperson a chance to answer. So the challenge isn’t whether there is an answer to why a defect isn’t going to affect the customer, but that many customers won’t even give you a chance, based upon the list or number of known defects. And any defects found by the customer (whether documented in advance or not) will appear even more critical than those that are explained, that the customer has been cautioned about and directed how to avoid.
 
Central bankers (like Alan Greenspan) have referred to a similar economic condition - they call it headwind: a force you have to continually work against, one that reduces your efficiency. Defects are a headwind to business.
 
It’s not a defect, it’s … an OPPORTUNITY
 
In a manufacturing environment, defects are typically test escapes. When there is a test escape, after the problem has been properly contained and the production line continues, the next step is to find the root cause of the test escape: Was there a difference between the spec’d test environment and the deployed test environment? Did a test get skipped? Or was there a failure to understand the need for a test? As you can see, the answer to this first question raises many more questions (all beginning with “WHY”). Traditional root cause analysis ensues and, based upon the learning, the lessons are applied.
 
Now contrast this with a software environment. What is the first thing that happens in a defect review? In my experience, one or more engineers begin brainstorming about how to correct the defect and when they can roll in the fix, OR (depending upon the phase of the project and moon) a discussion ensues on how long the delivery will be delayed by even considering fixing the defect. Both of these responses are WRONG. The number one question isn’t IF or WHEN but WHY, and the opportunity presented by every defect is rooting out the causes of defects (there ARE common causes, but you won’t see them if you don’t look).
 
Deadlines are deadlines, but even as hard as I push to get software I am waiting for released as soon as possible, the presence of one defect is like the presence of one cockroach. The only viable “fix” to emerging defects is to take time to get to the root of the defect. And I also think this is a major learning opportunity for the person in whose code the defect was found. If a function I wrote has a defect (of course being in marketing this is only theoretical) it should then be me who determines the root cause of this defect (even before identifying the fix). The rest of the team helps to review and vet the root cause analysis, not to tell the “defector” why they think he/she had the defect but to help the “defector” come to this reason themselves. Do not just stomp out the one cockroach seen but identify where it came from so you can look for the nest, find and destroy the eggs. If you know why you got a defect, you will know how to avoid getting one next time.
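To make the discipline concrete, here is a minimal sketch of a defect record that cannot be closed until its chain of "why" answers has been recorded. This is my own illustration: the field names, the example defect, and the required depth are all invented, not any real tracker's schema.

    /* Hypothetical sketch: a defect may be closed only after its "why"
     * chain has been dug out. All names and limits are invented. */
    #include <stdio.h>

    #define MAX_WHYS 5

    struct defect {
        const char *summary;
        const char *whys[MAX_WHYS]; /* answers to "and why did THAT happen?" */
        int         why_count;
    };

    /* Assumed policy: at least three levels of "why" before closing. */
    static int can_close(const struct defect *d)
    {
        return d->why_count >= 3;
    }

    int main(void)
    {
        struct defect d = {
            .summary   = "buffer overrun in UART ISR",
            .whys      = { "ISR wrote past rx_buf",
                           "length check used a signed compare",
                           "coding standard never banned signed lengths" },
            .why_count = 3,
        };

        printf("%s: %s\n", d.summary,
               can_close(&d) ? "root-caused, may close" : "keep asking WHY");
        return 0;
    }

The point of the gate isn't the number three; it's that "fix rolled in" alone never satisfies it, so the team is forced past IF and WHEN to WHY.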
 
It really is a feature, but in a bad way
 
The tongue-in-cheek response that a defect is really a feature IS true, in a way. If you find defects and only ask yourself whether each is a “feature” to explain in the release notes or a “defect” to be fixed ASAP, you are defining your product by its defects. If instead you take advantage of the opportunity presented by finding a defect to root-cause it, and thereby attack the source of future defects, you are defining your product by its quality.

And it’s measurable. If you know the root cause, you can monitor and count future appearances of defects with the same root cause.
================
Methods and Madness
Jon Pearson

May 24, 2011

Increasing software quality: get rid of the bug list. Not! http://www.cypress.com/?rID=49793

I love to read Jack Ganssle, and his recent column is a great one:
 
 
15 bugs away from being ready
Jack Ganssle
3/21/2011 1:57 PM EDT
Teenagers can learn only from their own mistakes. That seems true for a lot of software types, too.

 

Jack Ganssle asserts that since a Capers Jones study of late software projects showed that bugs were the biggest cause of late delivery, we should get rid of the bug list. This isn't as crazy as it sounds when you consider his justification: no one knows how long it takes to fix a bug, so there is no believable schedule while you have a bug list. To fix the schedule, get rid of the bug list.

The (real) problem is NOT that you have a bug list, but what you do with it. For instance, if an organization has quality as its charter and is producing software, every reported defect is a quality "escape" and should be investigated. Great - where is the extra team to do this? Exactly. If software quality is the goal, you cannot rely on the developers alone. You must engage a quality team (and yes, that does have two meanings).

I mentioned this a couple of posts back, when I cautioned that "you get what you measure", for good and bad. If you make a low number of defects (the length of the bug list) the measure of product quality, in and of itself, you will get fewer defects reported, but not necessarily higher product quality. Similarly, if you measure the productivity of the "quality" team by the number of defects reported, you will certainly get a high number, but again, in and of itself, a measure of nothing. So what is a bug/defect list for? As Jack recommends, should we just throw it out and replace it with a commitment to solve bugs as they appear?

NO! Emphatically NO! A bug list CAN become a driver of quality, and if that is the desire AND a firm development schedule is also desired, then put a team (maybe one person to start) in the role of Bug Detective. Is a new feature or enhancement reported as a "defect" (pretty typical)? Then the bug detective investigates the source of this new feature, why marketing/the product definer didn't include it, and MOST important, how to avoid the "defect" next time. Did a test fail? Why? What caused the failure? Based upon the source, dig deeper until the root cause is found (NOT "the schedule", but the real work/task that was forgotten or omitted) and, to increase quality, decide what is to be done differently in the future. Did a failure get reported from the field? Why wasn't it caught in a test, AND why did the bug occur in the first place? The sketch below shows the idea as triage.
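Here is that Bug Detective triage, with the categories and first questions paraphrased from the paragraph above. The names are invented for the example and are not part of any real bug tracker:

    /* Hypothetical Bug Detective triage: route each class of report to
     * the first "WHY" question it deserves. Names invented for illustration. */
    #include <stdio.h>

    enum report_kind { MISSED_FEATURE, TEST_FAILURE, FIELD_ESCAPE };

    static const char *first_question(enum report_kind k)
    {
        switch (k) {
        case MISSED_FEATURE:
            return "Why didn't the product definition include this?";
        case TEST_FAILURE:
            return "What real work or task was forgotten or omitted?";
        case FIELD_ESCAPE:
            return "Why wasn't it caught in test, and why did it occur at all?";
        }
        return "Unknown report class";
    }

    int main(void)
    {
        enum report_kind inbox[] = { MISSED_FEATURE, TEST_FAILURE, FIELD_ESCAPE };
        for (unsigned i = 0; i < sizeof inbox / sizeof inbox[0]; i++)
            printf("%s\n", first_question(inbox[i]));
        return 0;
    }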

In a sense, Jack's recommendation is good: to increase the quality of your software and the predictability of the project schedule, you must eradicate the bug list - by driving the number of bugs to zero. If you have a product with hundreds of bugs reported and captured in a bug list, you have a product of unknown quality; it isn't high quality, but you cannot know how low the quality is with a big bug list. Stop and take back control of your project.


Jon Pearson
Methods and Madness, March 27, 2011
Happy Birthday Quentin Tarantino; living proof that dropping out of school after 9th grade and working in a video store can be a viable career development path

When you can't get it all done: QB, The Office Robot http://www.cypress.com/?rID=49133

Question: What's the difference between having a job in a weak economy (2008-2010) and a growing economy like now?

Answer: When you have more to do than time to do it AND the economy is bad, you are just happy to still have your job.

Things in the market have improved, and the touchscreen market is continuing to explode - thanks to the smartphone revolution ignited in 2007 by Apple and the iPhone. And if you are part of this or any other current revolution, you know the feeling when times are booming and the work piles up faster than you can complete it. Like a "whac-a-mole" game where the moles haven't read the rules: they get bigger each time you knock 'em down, and they bring their friends. If this sounds familiar you will be happy to hear about the coming invasion of office robots to help you out. Take a look at what/who you may be facing at your next meeting.

QB, the office avatar

You can read more in this recent story by Charles J. Murray in Design News:
http://www.designnews.com/article/512661-The_Dawning_of_the_Office_Robot.php

"The robot, specifically known as the QB and built by Anybots Inc., has that kind of effect on people. Looking like a cross between a Segway and an ET doll" - just what you were hoping to meet at the water cooler, in the hallway or across from you at the next meeting.

The designers' vision is "to have the robot serve as an 'avatar' - a replacement for a person who can't attend a meeting." The inventor, Michael Clark of Anybots Inc., got the idea after producing heavy-duty robots meant to help with heavy lifting and then finding them roaming his offices, popping in on colleagues and aiding person-to-person communication. According to Clark: "Everybody has meetings and every meeting has a break where people go out in the hall and drink coffee and talk. Speaker phones don't move, but an avatar robot [like QB] can. It can go out in the hall and allow you to talk to people. It does everything except drink coffee." And who wouldn't want to share their secrets with ET's better-balanced mechanical cousin?

Take a look at QB in action:

 

Perhaps the best feature is that although the QB is short (also like ET), its head can be raised on its extended neck (like ET?). So the "driver" can corner a colleague in his or her office, approach silently, block the exit, and then loom until the colleague agrees to whatever it is the "driver" is trying to get across. That is, of course, unless the colleague has replaced himself with a QB in his cube while off at a meeting, perhaps somewhere like Hawaii.

Perhaps version 2.0 will even allow battling.

Jon Pearson
Methods and Madness, Feb 27, 2011
Happy Birthday John Steinbeck, Elizabeth Taylor, Ralph Nader and Chelsea Clinton

Worried about Tiger Moms? What about the Tiger Boss? http://www.cypress.com/?rID=48638

Author Amy Chua has raised voices from all points of the parenting spectrum in this country (perhaps the world; there were too many Google hits to check). Her point, in very simple terms, is: if you work harder and longer, you do better and go farther. One point she makes in her defense is that the world is a tough place and we have to prepare our kids to deal with its harshness. While there is likely little disagreement with these themes, it is her methods that have people (parents, educators) up in arms.

As examples, take a look at these two incidents of note, one from Ms. Chua's upbringing and one from her experience as a parent. Both of these are extracted from a recent Time Magazine article.

1) When Chua took her father to an awards assembly at which she received second prize, he was furious. "Never, ever disgrace me like that again," he told her.

2) It was the "Little White Donkey" incident that pushed many readers over the edge. That's the name of the piano tune that Amy Chua, Yale law professor and self-described "tiger mother," forced her 7-year-old daughter Lulu to practice for hours on end — "right through dinner into the night," with no breaks for water or even the bathroom, until at last Lulu learned to play the piece….When Rubenfeld [Ed: Chua's husband, Jed Rubenfeld, also a professor at Yale Law School and Lulu's father] protested Chua's harangues over "The Little White Donkey," for instance, Chua informed him that his older daughter Sophia could play the piece when she was Lulu's age. Sophia and Lulu are different people, Rubenfeld remonstrated reasonably. "Oh, no, not this," Chua shot back, adopting a mocking tone: "Everyone is special in their special own way. Even losers are special in their own special way."

You may or may not agree with her techniques, but the truth is that no matter what you do, unless you are preparing your child for the future world, you are not doing your job as a parent. Ms. Chua has been compared to a coach, though some would argue a coach with deranged methods. Does the same mentality occur in bosses, managers, project leaders? Absolutely.

What I learned looking at this "Tiger Mom" is that such a person is not born but made, over many years and experiences. In Ms. Chua's case it is obvious her family and the environment she grew up in shaped her (many Google hits confirm this), and likely she has also shaped her daughters, who will in some way personify "Tiger Moms" - not necessarily copying everything they experienced, but the themes will be there. So now for the "Tiger Boss". If you grew up with this kind of coaching, either literally growing up with a "tiger" parent or coming up the working ranks with a mentor, project leader, or manager driving his reports to excel and accept nothing less, how do you think you will act or lead? The best take the best from their past; others may fail to discern the difference between what motivates and what destroys.

Unfortunately, most of the reviews of both Tiger Moms and Tiger Bosses will come down to a review of the results, and detractors will find plenty of negative results (bullying, harassing, tearing people down) while supporters will find the gems, the teams who against all odds survived and thrived while others failed, the diamonds found in the rough.

My point: while you cannot choose your parents, you can choose your leader - or, more to the point, you can choose to find another leader if you are not happy with the one you have. But that doesn't mean you should leave just because things feel rough, because at times it does take someone else to push you to achieve, to not give you the choice to fail. Coaches do it, parents can, and leaders/managers can too.

Why Apple Mac downloads are faster http://www.cypress.com/?rID=48368

Think your computer is faster or slower than your neighbor's? Or does it seem some programs or computers do things faster? You are right: it just SEEMS like they do, and this appearance is subtle, subliminal, and very important to us humans.

Check out this 49-second video (sorry for the quick ad at the start, but even it's interesting).

It seems it isn't too hard to fool us humans into perceiving one thing while experiencing quite another. There is actually a ton of research going on in this area, and the findings are amazing. I am currently reading "A Mind Of Its Own" by Dr. Cordelia Fine, in which she trots out research findings (backed by 20 pages of attributions in super-small font at the end of the book) that show many ways our perception and behavior are seriously affected by simple things (for instance, if you get a free nail clipper before being surveyed about your car, your views will be more positive).

The video above looks only at progress bars on a computer, and if you haven't used a Mac lately, Apple employs as a standard the pulsating, left-traveling scheme which that video claims makes us believe a download is 11% faster. Want to get your next task done 11% faster? Figure out how to add the equivalent of left-moving ripples to your status reporting.

As described, it seems ludicrous that simply how you report something changes its perception. But think back to a painfully late project and remember the pain you felt each time you had to report yet another delay. It is my experience that the later or more challenging a project, the more I wished we could forgo the reports and instead put our energies into the work rather than into reporting on it. But experience also shows that as projects take longer, the desire for progress reports increases and the demanded frequency increases.

Is there a way to apply the rippling download bar to a project's status report? I think so, and the way I see it, each ripple can be viewed as a major task or milestone that is accomplished. Each task completion or milestone success that's reported and past becomes a left-flowing ripple. Can you keep the "ripples" flowing left by reporting successes? And as the end approaches (and perhaps the end date is getting missed), can you increase the "ripples" by reporting on more tasks (and hopefully more often) by making the tasks more granular? The question is, does this really help?

If you look at this one way, the question is "Can I deceive the world that everything is fine by providing an illusion of progress?" I think this is unfair, but in the hands of a manipulator that is exactly what it could become. More importantly, though, think of this technique as making the status or download bar more accurate. Have you ever seen a progress bar reach the end and then start over? I remember this happening frequently in one web browser I used in the past. How does it make you feel when the bar suddenly starts over? You lose confidence in it. What if the bar showed progress on downloading, say, 10 Mbytes, and then halfway through the bar slowed while the number of bytes (the goal) increased? You lose confidence again.
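As a minimal sketch of those two fidelity rules - the bar only ever moves forward, and the goal is fixed up front - here is a toy command-line status bar. The milestone count and bar width are invented for the example:

    /* Toy status bar: progress is monotonic (never restarts) and the
     * goal (the denominator) is fixed before reporting begins. */
    #include <stdio.h>

    static void draw_bar(int done, int total)
    {
        const int width = 40;
        int filled = done * width / total;
        printf("\r[");
        for (int i = 0; i < width; i++)
            putchar(i < filled ? '#' : '-');
        printf("] %d/%d milestones", done, total);
        fflush(stdout);
    }

    int main(void)
    {
        const int total = 12;      /* granular milestones, fixed up front */
        for (int done = 0; done <= total; done++)
            draw_bar(done, total); /* only ever called with growing done */
        putchar('\n');
        return 0;
    }

Making the milestones more granular adds more "ripples" (more calls to draw_bar) without ever moving the end of the bar, which is exactly what keeps the viewer's confidence intact.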

So for your next or current project keep in mind the study in the video. Perception of (better/faster) progress is really based upon how the status is presented. Also consider your own feelings when a status bar begins to deceive you on true progress. Then create a plan to collect and report your status so you can maintain the most positive perception (progress) while protecting the fidelity of the goal, the end of the progress bar. You just might avoid getting untimely "help" from management and instead get extra support for what your project truly needs - focus to accomplish the important goals. 

Want to know what Apple is doing? Here's what a little (Angry) Bird told me http://www.cypress.com/?rID=48240

The touchscreen market is still a wild west, and a major land grab continues where each manufacturer and even each project team is trying to one-up the other. At the same time, each wants a stable product they can take to market fast and with little risk, because you can't meet a competitor at high noon if your new weapon is still on the engineer's bench.

So what, you say? Well, as a leader in touchscreen components and solutions, we try very hard both to watch what others in the industry are doing and to see how manufacturers are using touchscreen technology. But this isn't easy to do. Teardowns can provide some information about the technology being employed, but how do you decipher the actual impact of one solution with an accuracy of 0.4 mm versus 0.5 mm?

As you can tell, that was a rhetorical question, since in the end it doesn't matter (what?!). What really matters is how this techno-paraphernalia is actually being put into action. This issue is not a "touchscreen" phenomenon but true of all products. How do you really tell how your latest technological advances are being used, and thereby learn how to improve them? Follow the users - or, in the mobile application space, follow the birds, the Angry Birds.

For anyone who doesn't know, Angry Birds is the wildly successful game from Rovio, a Finnish game developer: over 50 million downloads worldwide, including yours truly. The best 99 cents I ever spent. Birds of various sizes and properties are loaded into a slingshot, the player pulls the slingshot back and aims it, and for most birds an additional touch is required to activate the bird's "special" power. It all works very nicely and intuitively on a touchscreen-enabled device (iPhones and Android, probably others; plus a version was launched for the Mac on the new App Store, but I don't know how well that works, yet). If you haven't tried this game yet, I highly recommend it. There are free LITE versions available.

So can you actually learn from Angry Birds? One thing I learned, by watching it happen, is that Angry Birds appears to put my iPhone in a very "active" state while the application is running, seriously draining my battery. What I can infer from this is that Angry Birds is keeping the touchscreen in a high-intensity active mode (the highest-power mode), since the birds need an extra touch to explode or fragment or speed up (unleash the "special power") after they are released from the slingshot. What this tells me is that an end-device maker may expose certain capabilities directly to the 3rd-party application developer that can impair the power profile I had planned for the device maker to use. What else this tells me is that perhaps one simple "active" mode is not going to be sufficient, OR that device makers need to be better trained on what capabilities to expose to applications, OR that 3rd-party developers may also need training on how to get the performance they need in their app while preserving the end user's battery time.
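To picture the trade-off, here is a hypothetical sketch of a controller with a high-rate active scan and a low-rate look-for-touch mode. The mode names, timeout, and rates are invented for illustration; they are not Cypress (or anyone else's) specifications:

    /* Hypothetical touch-controller power policy: a touch promotes the
     * part to its highest-power scan mode; sustained idleness demotes
     * it. An app demanding mid-flight taps keeps it pinned in ACTIVE. */
    #include <stdio.h>

    enum scan_mode { LOOK_FOR_TOUCH, ACTIVE_SCAN };

    struct ts_state {
        enum scan_mode mode;
        unsigned idle_ms;            /* time since the last touch */
    };

    static void on_tick(struct ts_state *s, int touch_seen)
    {
        if (touch_seen) {
            s->mode = ACTIVE_SCAN;   /* fast scan, highest current draw */
            s->idle_ms = 0;
        } else if (++s->idle_ms > 500) {
            s->mode = LOOK_FOR_TOUCH;/* slow scan, wake on first touch */
        }
    }

    int main(void)
    {
        struct ts_state s = { LOOK_FOR_TOUCH, 0 };
        for (int ms = 0; ms < 1000; ms++)
            on_tick(&s, ms == 100); /* a single touch at t = 100 ms */
        printf("mode after sustained idleness: %s\n",
               s.mode == ACTIVE_SCAN ? "ACTIVE_SCAN" : "LOOK_FOR_TOUCH");
        return 0;
    }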

In case you may be inferring from this article that Cypress is the touchscreen controller provider for the iPhone, I must say that we do not release this information without the express permission of the device maker and that neither Apple nor Cypress has ever announced the use of Cypress' products in the iPhone.

The real message to take from this post: if you want to know more about how your technology or products are used, you need to review the popular uses of them, infer from that experience how your product is or isn't being used, and then formulate ways of improving your product in that environment, or determine how to better inform your customers of ways your product can be used for peak performance. Providing the best bag of tricks is only helpful if the user/device maker can and does take advantage of them.

So follow the birds, whichever/however they appear in your customer's products.

Jon Pearson
Methods and Madness, Jan 8, 2011 (Happy New Year!) 

Wrap up the old, ring in the new with Einstein http://www.cypress.com/?rID=47837

As I scramble to wrap up the projects of 2010 and begin to look forward to the pile of projects for 2011 (growing in scope and number like a cancer), two questions come to mind: How did I get here? And what do I do next year to make it better? How better to "stand on the shoulders of giants" than to learn from Einstein - specifically, from his quotes?

 

Five Lessons from Albert Einstein for 2011

"Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction."

1 - In 2011, look for the simple, peaceful solution. In an email discussion, it might mean responding with a simple "yes" or "no", or, if the "question" hides a request for action from you, "yes (or no), but I cannot address it now". As for the "violent" part, this is a bit more difficult, but in discussions and emails as well as project reviews, seek to "deflate" conflict rather than ride it or fuel it. Simplicity comes through peace and thought, complexity through stress and reaction.


"Imagination is more important than knowledge."

2 - In 2011, take a moment to imagine the result of an action before taking it. True, you may not "know" what will happen, but through past experience it is likely you do know many bad things that can happen. Knowing how to solve a particular problem may come after imagining life after the problem is solved.


"Anyone who has never made a mistake has never tried anything new."

3 - In 2011, make mistakes, allow others to make mistakes, and do not fear mistakes. It is through mistakes that we learn and teach our brains (the emotional, quick-response part) how to better guide us in the future. More importantly, expect mistakes and plan to capitalize on them. This doesn't mean to stop paying attention to your work so it is littered with mistakes. Allow yourself to stretch, expect that in doing so there will be mistakes, and plan accordingly. Avoid plans and schedules that expect every piece of toast to land jelly-side up.


"Not everything that counts can be counted, and not everything that can be counted counts."

4 - In 2011, make a point to recognize the achievements that don't have numbers attached. In a culture that thrives on numbers this can be difficult, so this advice may need to be set on its head: instead of lamenting "uncountable" achievements, provide a way to count them. One simple, straightforward way is to share. This may mean writing it down in a memo or whitepaper, or arranging a training session for others. Because...

 

"The only real valuable thing is intuition."

5 - In 2011, make intuition your goal. Intuition is what gets you through life faster and easier. Building intuition is really the point of all the other points above. When you understand something intuitively, you can be creative; when you don't, you will bump into walls repeatedly. Of course, learning from the walls you bump into (the mistakes) will help you build intuition. So will asking why, especially of yourself.

 

In 2010 I wrote myself a message on my whiteboard - "WHY ARE WE DOING IT?". This message was meant to force me back to ground zero whenever I got caught up in the tasks and lost sight of the mission. I thought if I (intuitively) understood why "we" were doing something I could better face the obstacles, mistakes and problems that came along, and more importantly, explain to others the actions we were taking. I will keep that message up as the new year turns over, hopefully taking it more and more to heart.

Of course, all of the above could just be the imaginings of a madman. When I looked into the source of the quote "standing on the shoulders of giants" I found this from Nietzsche: "(progress) can only come from those rare giants among men shouting out to one another across the annals of time." May Einstein's thoughts one day reach the ears of another giant, but in the meantime, the view is terrific.

Jon Pearson
Methods and Madness, Dec 29, 2010

Holiday Best Wishes - Design Madness Style http://www.cypress.com/?rID=47534

Here is my Christmas card to readers (OK, it is someone else's original content, but I found it and am sharing it with you).

 

 

In this holiday season, remember that the gift we give our customers is the "beautiful" designs we create. The best gift you can give your customers is the freedom to do things previously unimaginable, at least by you.

When the rumors were swirling that Apple was working on a cellphone, there was a strong consensus response: Why? Everything that can be done with cellphones has been done, right? And how would a new player enter such a saturated market? At the time, all great questions, and with hindsight all wrong-headed. Apple delivered, ultimately, an application development environment and an application marketplace where developers can do virtually anything (even Steve Jobs has been surprised at the uses for his iPad, and in one case uncharacteristically refused to take credit). When you are creating something, anything, you can be sure it will be used in ways you never imagined. Design with that mindset and be delighted at the results.

So my wish is simple, besides taking time this season to reconnect with friends and family, consider all the lives touched by the work you do, the products you help make, and believe that somewhere someone is using it in a way that would make you cringe or explode with laughter.

Now and in the coming year, may you create beautiful designs and magical products, and may your customers use them in completely unintended ways.

 

The Reptile Mind http://www.cypress.com/?rID=47355

Consider these quotes I gleaned from the internet concerning the most primitive core of our brain/thinking:

1) "First and foremost among the traits generated through the reptilian brain is the drive to establish and defend territory." (www.bibliotecapleyades.net/sumer_anunnaki/reptiles/reptiles14.htm)

2) "It carries out a set program of behavioral responses, when presented with certain external triggers. It does not learn from its mistakes, and understands only images, not language." (www.eruptingmind.com/reptilian-brain-triune-model/)

3) "The reptilian part of the brain developed very early in the evolution of our species and gave us an enormous evolutionary advantage. It enabled the earliest reptiles to make primitive but vital choices- the reptile asks only three basic questions of any thing or situation it encounters:
1. Can it eat me/hurt me/kill me?
2. Can I eat it?
3. Can I have sex with it?
If the answer to all these questions is 'no,' then the object is deemed to be a 'rock,' and no further notice need be taken of it." (www.sedonavortexconnection.com/SVCMonthly/Articles/Current/Reptile.html)

One day last summer as I was walking and thinking, I came across a tiny garter snake - only 6 inches long, with a head the size of my smallest finger. Intrigued, I engaged it, blocking its way with my shoe and watching the snake's reaction. The first couple of times, it changed direction. About the third or fourth time, the snake instead decided to hold its ground and attack - my shoe. Needless to say, the snake's attack did nothing to me (or my shoe), and when further engaged the snake continued to attack my shoe.

This illustrates the "reptile mind" and is line with the quotes above. My shoe couldn't be eaten, nor could it serve as a mate, and therefore the snake determined "it (shoe/human) could kill me (snake)". But its response was to first try and divert slightly, briefly, but still headed toward the original objective, to get to where it originally wanted to go. When that didn't work, it settled into defense/attack mode.

We share this "brain" with the snake (according to neuroscientists) and it probably is the same size in us as in the snake - only 4 grams. This is the basic and instinctive part of the brain, and we use it all the time. When you step out in the crosswalk and then see a truck coming, do you stop to consider the best approach to this stimulus, weighing the pros and cons? No, without a thought you jump back to the curb or sprint across the street. And this part of our brain is getting first crack at stimulus all the time. Even in design reviews.

So the next time you are in a review and feel yourself or your "victim" drop into defense mode, stop, think, and try to re-engage the thinking brains (theirs and yours). That might mean trying to understand why the other person has stopped thinking and trying to bring them out of reptile mode (or starting the conversation so together you can get out of this mode). It is harder to get yourself out of this mode on your own, so help others so they can later help you.

Did I write that? Cool! http://www.cypress.com/?rID=47056

Here is my confession: my memory is great and terrible. Not in the way the Russian tsar Ivan IV Vasilyevich was great and terrible (see en.wikipedia.org/wiki/Ivan_IV_of_Russia for more on Ivan the Terrible). My memory is great in that I can store and categorize vast amounts of sometimes trivial information (did you know Alan Parsons engineered the Beatles' Abbey Road album?), and it is terrible in that once it's stored I often lose details about the data, such as who said what at which meeting on what date. When I am asked the same question twice with some time between the askings, it is likely my answers are not identical. And don't come back to me and say "Remember what you told me last week?" because I will definitely not - not without some contextual prompting.

I went through one of these inverse deja vu experiences this week, after an editor sent me a copy of an article of mine they are publishing, titled "Creating Embedded Systems with Changing Requirements". I originally pitched this article over a year ago and finished writing it probably 9 months ago. But when I reviewed it today as a final editing step before publishing, I had a hard time remembering writing all of these exact words. Certainly there wasn't anything I couldn't have written, but unlike a song, one doesn't recite an article over and over to reinforce it. Another way to say this: given the exact same outline and figures 6 months apart, I would write two very different articles.

So what? The reason I share this is that I have recently accepted the terribleness of my memory and the need in today's environment to be able to recall not just what, but when and with whom something is said or agreed upon. We work today in such a group-driven way, and the groups are meeting often through email across many time zones, and not everyone always gets the same context. So I have become a strong proponent of detailing my work in a notebook/diary form.

Mind you, proponent and proficient are not the same thing. I realize the huge benefit of detailed notes, but also know that for me, when I am writing about what was just said, I am not listening to what is being said. So I have to balance what I write in my notebook, and when I write it, with participating actively in the meeting discussion. Taking notes is somewhat easier when I travel to meet with customers where they speak another language. Those meetings provide more opportunities to put your head down and scribble a few phrases while a local-language discussion is under way.

But in just the course of writing this I have also realized that since so much happens outside meetings, through email and conversations, my notebook diary approach is still flawed. I guess I really need to keep it open all the time, jotting references and notes and dates and names whether the exchange is in email or phone or the hallway. Another challenge in moving from proponent to proficient. Wonder if there is an automated way to do this?

Last year I heard about an automated memory system called SenseCam, which essentially records everything that goes on. The company Vicon is trying to commercialize it (read more here: http://www.zdnet.com/blog/storage/dear-diary-i-did-what/833). There is a motion detector and a camera, and it takes short snapshots of your life as it happens. I can't remember if you get the audio associated with the snaps. The target audience is Alzheimer's sufferers, and the notion is that reviewing pictures of past events, where the past is even the last hour, improves recall. This product is an interesting beginning, and I do feel that as we are bombarded more each day, something along this line will need to be incorporated into our everyday lives.

I just hope it has a great online help manual :) 

]]>
Sun, 21 Nov 2010 18:51:40 -0600
I'm gonna' turn you into a rabbit http://www.cypress.com/?rID=46976 Humorist David Sedaris recently published the book "Squirrel Seeks Chipmunk", in which the author excerpts, extracts and extrapolates the situations humans find themselves in, and their behavior, by writing tales where animals rather than humans take on the actions and affectations - he calls it a "bestiary", mostly because he never quite figures out the moral, and therefore couldn't call them fables. And it works because not only does it hit home, but it lets one see familiar (usually bad) behavior one level removed, with painfully familiar animals acting out. (Full disclosure: I haven't read this yet, but I want to based upon what I have heard and read about the book. All points still apply, and maybe I can re-post after I do read it with some more comments.)

NPR's Morning Edition (Sedaris is a frequent guest and contributor to NPR) covered the book's release with an interesting segment not long ago that spoke with the author and about his book (listen or read it here: www.npr.org/templates/story/story.php). One particular exchange in the story struck me: "…one of the many flawed creatures in Sedaris' bestiary sends up a bully of a security guard he encountered at an airport security check. 'I just looked at her and I thought, 'I'm gonna turn you into a rabbit,' ' he remembers. 'So, I wrote a story about a rabbit who's put in charge of security in the forest.'"

Sedaris was able to concentrate on exploring the behaviors of individuals (many of whom were probably close to the author) by giving new names and species to the misbehavers. He separated the "products" of these individuals in a way that they could be examined without having to apologize to "them". Design and code reviews, too, are meant to focus on the product rather than the producer. As humans, at the top of the evolutionary brain chain, we say we can, but we seldom really succeed at effectively separating the two. What you see/hear is sometimes less important than who it comes from. How can this affect the results of a design or code review?

Positively: if you know and understand the quirks of the producer, you can adapt your review to catch his/her weak points. You can also save time not over-reviewing areas of strength.

Negatively: if a person's reputation is strong and positive and precedes them, the reviewers may not focus on the very details they are there to examine, thinking "this" person can't make "that" kind of mistake.

So I cannot help but find myself hoping for a situation where "squirrel" or "rabbit" are the names of the coders and the group of reviewers knows nothing about the coder, and instead they focus in on the details without prejudice. But what if the label "rabbit" meant more like in the old fable by Aesop instead of Sedaris' TSA clerk - where the character was quick but a bit sloppy and impulsive? Wouldn't that characterization help a reviewer know what to look for in his/her code?

The truth is, in most design groups the numbers are small enough that we will know each person well enough to let their character prejudice the review, but we must be keenly aware of those prejudices - and perhaps even invite input to break them when necessary or reinforce them when justified. If possible hold a pre-review meeting; let the producer/coder comment on what he/she thinks warrants a closer look, what have been problem spots in the past; then hold a group discussion without the coder about how the group can concentrate their review time. Sort of a review strategy session, determining how to maximize the results, but with a conscious view to what/why they are reviewing, trying to ferret out unproductive prejudices. As I consider past reviews and checklists I have been through, seldom (if ever) has a review consciously proceeded this way.

Modern marketing relies on our failure to isolate product from producer, building up and selling the brand first and foremost; products usually come second, or a great product remains a "best kept secret" when there is no brand marketing. This works in general only if the product gets as much attention as the marketing, and it eventually fails when the product does not satisfy. But when it comes down to each human, the product almost always has to stand on its own every time, and the reviewers of code or a system design are the only safety net available.

Don't let "brand" blind you.

]]>
Thu, 11 Nov 2010 20:31:42 -0600
A beautiful design unseen http://www.cypress.com/?rID=46836 This week, Apple Inc reported a record $20B in revenue for the 3rd calendar quarter of 2010. In addition, they reported $4.31B in profit. 43% of Apple's revenue comes from the US. Compare this to today's report from Boeing of revenue of $17B and profit of $0.837B. How does this happen for a company that, until the iPod (and iTunes) gained Windows support in October of 2003, produced a respected but niche computer called the Macintosh? DNA, and that DNA comes from Steve Jobs himself.

There are a couple of very interesting accounts of the man/myth of Steve Jobs; here is a video from Bloomberg's "Game Changers" series that is informative and entertaining: www.bloomberg.com/video/63722844/

In the video you hear about Jobs' detailed focus and drive, and his expectation of more than anyone on his team ever believed they could deliver. My favorite quote in the video is from Guy Kawasaki, who says "if you are a product manager for a product being announced, the preceding 3 months are hell and on the day it is all over in 10 seconds."

Even better is this transcript from an in-depth interview with "the CEO who fired Steve Jobs" John Sculley: www.cultofmac.com/john-sculley-on-steve-jobs-the-full-interview-transcript/63295

The best quote I found to illustrate why Steve Jobs succeeds in doing what often appears to be windmill-tilting is this:

"…the great skill that Steve has is he’s a great designer. Everything at Apple can be best understood through the lens of designing. Whether it’s designing the look and feel of the user experience, or the industrial design, or the system design and even things like how the boards were laid out. The boards had to be beautiful in Steve’s eyes when you looked at them, even though when he created the Macintosh he made it impossible for a consumer to get in the box…"

For Jobs, EVERYTHING has to be beautiful, has to begin with the design. Even detractors will say Apple's products are "beautiful but…". The truth is that you can systematically remove the "buts" from a product (or the pimples from the butt of a product), but true beauty and impeccable design can rarely be bolted on. In Jobs' mind, the beauty is connected to what is left out even more than what is included (he was known to have a huge home with almost no furniture).

Want to see the "new beautiful"? Check out the new MacBook Air video: www.apple.com/macbookair/#macbook-air-video. Even the designer's voice in this video can be called beautiful (OK, maybe Phil Schiller doesn't push the envelope on beautiful). Imagine all the things that had to be left out to get a 13" laptop that is 0.68" at its thickest in the rear and all the way down to 0.11" thin in the front. Now look at the computer you are reading this on and note its "beauty".

Why bring this up in an embedded design methods blog? Only one reason - beauty, like good design, is NOT only skin deep; it is deeply embedded in the DNA of your product or project. Ever get the comment during a code review, "Hey, this code is beautiful!"? Shouldn't that be one of our goals?

]]>
Wed, 20 Oct 2010 18:05:43 -0600
Imitating Art Imitating Life http://www.cypress.com/?rID=46735 What makes a strong impression? Tony Blair (former UK prime minister) reported in his recently published memoirs the details of a conversation with the Queen. Problem is, that particular conversation only occurred in the movie "The Queen" and was fabricated by the movie's screenwriter.

Just because a person hasn't personally experienced something doesn't mean it can't make a lasting impression. Neuroscientists tell us that experiences are especially memorable when several senses as well as emotions are involved. In the book "Brain Rules" (www.brainrules.net) John Medina describes an experiment where one set of students studied vocabulary the ordinary classroom way, while another studied while eating intensely aromatic popcorn. Later, the students who had both studied and been tested in the presence of popcorn performed the best as a group.

So how do we apply this thinking to design and our projects? One way that comes to mind is to help the entire design group experience the trials, tribulations and celebrations of key parts of a project (which are usually suffered by individuals). For instance, if one team member experiences a particularly frustrating bug, especially one which is best prevented early in design or review, contrive a way for the group to experience this through a leading discussion/exercise re-constructing the circumstances around the bug and its discovery (in some venues this might be called role-playing). How do you integrate more senses? How about fresh-brewed coffee and cinnamon rolls in the room (or pizza and hoppy ale), but in order to get some, the participant must achieve a learning milestone, ultimately finding the bug and solving it - or suffering without the treats (but having to smell and salivate over them) until the end. Why pizza and coffee? What do we consume copious amounts of in the throes of a project death march?

Can we do this for every lesson? As they say in accounting, that depends. What increase in quality and decrease in resources is achieved in learning the lesson? The larger the magnitude of the benefit, the more time that should be invested in developing the learning and cementing it with all senses. How many times have you seen the same mistake made over and over, but the discovery and correction of it is separated so far from when it was made (or could have been first discovered) that the cause-and-effect relationship is lost on the learner?

If all this sounds like too much to take on, consider this possibly easier approach: when bugs or mistakes are found, rather than simply dispositioning them to the team and moving on, spend a few minutes right then and there to ask the following questions: 1) how was the bug uncovered? 2) when could the bug have been uncovered earlier? 3) how can we prevent introducing the same bug next time, or find it earlier? Perhaps one more, even more important, question to ask: what if we shipped the product with this bug?

]]>
Tue, 12 Oct 2010 12:22:56 -0600
When something exceeds your ability to understand how it works, it kinda becomes ... DANGEROUS http://www.cypress.com/?rID=46545 At the launch of the iPad early this year, the Apple product designer Jony Ive made the statement: "when something exceeds your ability to understand how it works, it kind of becomes magical". And so was launched the keyword for the iPad: MAGICAL (You can see this for yourself in the first 10 seconds of this product introduction video: www.youtube.com/watch; or for a satirical look, check out this video: www.youtube.com/watch).

While "magical", "phenomenal" and "insanely great" may be appropriate wielded by a black-mock-turtleneck-and-jeans-wearing titan of silicon valley, when it comes to explaining an embedded system better descriptive words to use are "simple", "straightforward", "deterministic", "safe" or "bulletproof". And yet, when most of us get a chance (or are forced) to present a design up to management or out at a customer or to a peer, we usually so complexify and obscurify the design that by the end, the best closing statement should be: "I think you must agree this is truly magical".

Don't get me wrong, I am NOT confusing simple with easy. Embedded designs are complex and they are VERY hard to get right, but if you cannot explain your design in an understandable fashion, not just to upper management but to a peer or team member, you have likely signed yourself up for late nights and weekends of ferreting out obscure bugs. And making a complex and tangled web of features and requirements into a simple(r) design is even harder. But if you can explain it, you can properly review it with senior and junior team members, and rather than just checking a milestone box with that review you can get valuable time- and hair-saving feedback. If you can explain your design you can quickly give someone enough information to join your effort and contribute (which means knowing where to start learning what he/she needs to know to begin to contribute, rather than spending all his/her time complaining about spaghetti code). Practice on your dog, since they have an attention span of about 15 minutes, while in humans it is only 10 (see this very enlightening article; not sure they were just talking about dogs though: capitolk9.blogspot.com/2009/04/dogs-attention-span.html).

Now if you can explain your design so that upper management can understand (which takes an understandable design AND an understanding of the design AND a conscious choice of words that together can be understood), you will more likely get more time/space when you need it to solve a really tough problem, and REAL help when you cannot solve the problem on your own. Managers all the way up like magic, too, but getting a project working reliably with sufficient verification within the schedule allotted is "TRULY MAGICAL".

That said, and speaking as a true nerd who is gadget obsessed, the Apple iPad really does look and feel like much more than just a music-and-video playing, book-reading, net-surfing handheld tablet computer. I probably wouldn't say "truly magical" too often, but that's just me. And you know what else is great about it? The iPad size and shape fits well between the supports of my Volvo's steering wheel. So now I can watch a movie and drive at the same time. 

]]>
Thu, 30 Sep 2010 10:58:40 -0600
Rethink Reboot http://www.cypress.com/?rID=46405 Quick test: the gadget in your hand is acting up, what do you do? Restart it, right? If it is running Windows you give it the 3-finger salute. CTRL-ALT-DEL, reboot, restart, cycle power, they all mean the same, and since the dawn of the computer age, this is the number one troubleshooting guideline for any problem. But have we accepted this "behavior" too easily?

There is a massive difference between rebooting an MP3 player and having to reset a pacemaker, but at the heart of it, are these two so different? Both are embedded systems; both have specific, regular and irregular inputs; both are useless if they cannot produce their output. So what is the biggest difference between them? One MUST NOT get hung up and require a reboot. One will take any opportunity to shave off unnecessary design and test steps to save money and schedule. Hold on, why don't both statements apply to both products? Shouldn't every project eliminate unnecessary design and test steps? Why shouldn't every product just continue to work?

So the real difference is what we, the consumers, have deemed necessary when we vote with our dollars. Which answers my earlier question: YES, we have accepted this behavior too easily, AND now it is time to rebel.

Great, how do we get started? Since we are the designers and testers for these products we need to start looking for these bugs and once found, rate them high and get them fixed BEFORE we ship. We need to learn from the bugs found and eradicate the root causes in our designs before the tests.

How can we do this? For PSoC and many microcontrollers, the watchdog timer (WDT) is the primary mechanism - but it is also the "savior of last resort". Once a watchdog times out, that's it, a reset is still required; it is only marginally better if the controller takes care of it rather than the user.


So what is a better way to use the watchdog timer? Two ways: 1) as a diagnostic resource during development and test; 2) as a way to recover gracefully, provided enough bread crumbs were left before the reset. These are not mutually exclusive; use both. It is essential to plan the WDT reset recovery and design in the "bread crumbs" needed to recover as gracefully as possible. But we also need to design in the diagnostic "bread crumbs" that will help in seeking out WDT-reset-inducing situations before they "shoot the engineer and ship it". And if someone in the field finds a failure, those same bread crumbs will help identify the source and eradicate it.
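To make the bread-crumb idea concrete, here is a minimal sketch in C. It is illustrative only: hw_reset_was_watchdog(), hw_feed_watchdog() and the other externs are hypothetical hooks standing in for your part's actual registers and functions, and the ".noinit" section attribute (shown in GCC syntax, which varies by toolchain) assumes your startup code leaves that RAM untouched across a reset.

#include <stdint.h>

/* Hypothetical part-specific hooks -- implement for your target. */
extern int  hw_reset_was_watchdog(void);   /* reads the reset-status register */
extern void hw_feed_watchdog(void);
extern void log_wdt_event(uint32_t checkpoint);
extern void restore_user_state(uint32_t checkpoint);
extern void read_inputs(void);
extern void update_outputs(void);

#define CRUMB_MAGIC 0xC0FFEEu

/* The bread crumbs: placed in a section the startup code does not zero,
   so they survive a WDT reset (section name and syntax are toolchain-specific). */
static volatile uint32_t crumb_magic      __attribute__((section(".noinit")));
static volatile uint32_t crumb_checkpoint __attribute__((section(".noinit")));

static void leave_crumb(uint32_t checkpoint)
{
    crumb_magic      = CRUMB_MAGIC;
    crumb_checkpoint = checkpoint;     /* an ID for each main-loop stage */
}

int main(void)
{
    if (hw_reset_was_watchdog() && crumb_magic == CRUMB_MAGIC) {
        /* Diagnostic use: record which stage we never returned from.
           Graceful-recovery use: put the user back where they were. */
        log_wdt_event(crumb_checkpoint);
        restore_user_state(crumb_checkpoint);
    }
    crumb_magic = 0;   /* clear so a later boot never trusts stale crumbs */

    for (;;) {
        leave_crumb(1); read_inputs();
        leave_crumb(2); update_outputs();
        hw_feed_watchdog();   /* fed once per loop, only after every stage completes */
    }
}

In development, log_wdt_event() turns a mystery hang into a pointed question ("why did we die in stage 2?"); in the field, restore_user_state() is what keeps the reset from failing the user.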


Sounds simple. But it's not easy. It's simple to say: "Don't let the device get into a state where the user needs to reset it, and if the device resets itself, make sure to get the user back to where they were". Adding this to any requirements document is simple - seeing it through is not easy. But you have to start, and the place to start is to reboot your attitude.


Forced restarts mean you failed the user. Plan to succeed.

]]>
Sun, 26 Sep 2010 15:33:54 -0600
There will be zombies http://www.cypress.com/?rID=46323 I am a big movie buff. Not a true movie nerd like Quentin Tarantino, but I like seeing as many as possible and probably think about them too much. I also like to read, every day. Although I read few "true" classics, I don't just read mindless stuff either: some sci-fi, some fantasy, but also thrillers and other mysteries, and if it has a hint of science, all the better. Some of each genre I consume could be called classics.


I have noticed that year after year new books and movies come out featuring the once-dead-but-not-so-much-now -> Zombies. Never just one either (book, movie or zombie). And while on the surface you might dismiss these releases as just more of the same theater-seat-filling drivel, why do they keep being made and consumed? Because there is always another way to tell the story. And like any good story, it will fail or succeed in the telling.


Just like all classic stories/plots, including the classic romance - 1) boy meets girl, 2) boy loses girl, 3) boy wins girl back (exchange any/all instances of boy or girl as you wish) - the zombie story line is also predictable: 1) dead rise from graves and seek living humans, 2) a small band of the living battles the undead, while one or more gets bit, turns zombie and must be terminated, 3) a much smaller band of remaining humans gets the upper hand and wins the day - at least one more day.


As with all formulas, someone will try to bust it by changing some elements, often in the third act ("Shaun of the Dead" is awesome at this [SPOILER ALERT] - in the end Shaun's best friend has become a zombie and Shaun keeps him on a chain in the garden shed so they can still play video games together [END SPOILER]).


In the course of less than a month I saw 2 zombie movies (Shaun of the Dead and Dance of the Dead) and read a book called Xombies ("X" sounds the same as "Z"), and while I subconsciously knew the above sequence would play out - and when I was done watching/reading I could see how it had - all three experiences were enjoyable because they succeeded despite their formula. These stories succeed by telling this tired, formulaic story better, introducing unique twists along the way. One zombie movie I saw in the last year really thrives in the formula while at the same time being extremely fresh: the Norwegian movie Dead Snow. This movie includes Norwegian punk rock music and WWII Nazi zombies along with other twists. You can watch it dubbed into English or in the original Norwegian with subtitles.


So what's this have to do with design? Simply that the same (tired? old?) patterns play out time and time again in our projects (for instance, if you have two asynchronous processes or tasks, a race condition is highly likely), and the better you recognize the pattern, the better and more efficiently you can "tell" the story with your code (with 2 asynchronous tasks, a semaphore is required - or else a really intelligent design that provably doesn't need one, with a known reason why not). A minimal sketch of that pattern follows.
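Here is the two-asynchronous-tasks formula in miniature, in C. This is a hedged illustration: handle_event() is a hypothetical placeholder, and __disable_irq()/__enable_irq() are CMSIS-style interrupt-masking intrinsics (normally supplied by your device header; names vary by compiler and core) - the point is the pattern, not the particular calls.

#include <stdint.h>

extern void handle_event(void);   /* hypothetical work function */
extern void __disable_irq(void);  /* normally provided by the device's CMSIS header */
extern void __enable_irq(void);

/* Shared between a receive ISR and the main loop: two asynchronous
   "tasks", exactly the setup where a race condition is highly likely. */
static volatile uint32_t events_pending;

void rx_isr(void)
{
    events_pending++;             /* read-modify-write #1 */
}

void main_loop_racy(void)
{
    if (events_pending > 0) {
        handle_event();
        events_pending--;         /* read-modify-write #2: if rx_isr fires
                                     between the read and the write-back,
                                     one event is silently lost. volatile
                                     does not make this atomic. */
    }
}

/* The "semaphore" in its simplest embedded form: briefly mask interrupts
   around the shared update so the two tasks cannot interleave there. */
void main_loop_safe(void)
{
    if (events_pending > 0) {
        handle_event();
        __disable_irq();
        events_pending--;
        __enable_irq();
    }
}

The racy version passes every casual bench test and then eats someone in Act 2; the safe version names the formula (a critical section) and follows its rules.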


If you, like the cast in the movie Dead Snow, flout the elements of one of these patterns (Dead Snow mocks its predecessors, the "Evil Dead" movies), you can get blindsided by not paying attention to the rules of the formula - until they bite you, and then you are simply a plot element in Act 2 of your own zombie tale.

]]>
Sun, 19 Sep 2010 17:19:39 -0600
Everything's FUBAR. Oh, wait, now it's perfect. http://www.cypress.com/?rID=46206 We engineers pride ourselves on sweating the details. But when you are heads-down on a detailed problem, it is easy to lose sight of the big picture. And since engineers build an emotional attachment to their projects, one detailed bug can completely obscure thousands of successes. So that's why we seem so bipolar when it comes to assessing a project. It is either amazingly rosy or irretrievably screwed up.


I've also seen (never been guilty of this) the situation where "I don't understand how this code works" quickly becomes "I don't understand how this code could ever work". It's that emotional attachment again. So what's a manager to do?


There was a recent online post by Chuck Hill (www.eetimes.com/electronics-blogs/pop-blog/4207358/How-to-train-your-boss-in-the-proper-bug-etiquette) that started with the above phenomenon and took it to its absurdly logical conclusion: you must train your boss to only ask for status during the correct "pole". I disagree; I think there are several possible approaches, so let's look at the alternatives:


A) Train your boss to wait for status until you are smiling: this is Chuck Hill's conclusion, which means one of two things; either 1) I (the engineer) will act like a blathering idiot while debugging a hard problem, so the only way to find out true status is to catch me between bad bugs, or 2) managers only want the good news.


B) Train yourself to look for your boss's smile (the right kind, not the evil "take over the world" one) before dumping on him your latest unsolvable bug and the inevitable cataclysm to come. This is the corollary to A) above, and while managers may want to hear only good news it is much more fun to deliver bad news when they least expect it.


C) Prozac: don't let anything excite you and go along your merry way with a lot of other lemmings. It is what it is and that's all it ever will be.


D) Understand that projects and people are both complex organisms, and one-liner reactions or status will never give a good picture. Both the engineer and the manager of engineers must understand the question asked and the answer given are not the end of the story. What is the question behind the question and the answer behind the answer, as well as the answer (expected) behind the question and the question (not asked) behind the answer.


Don't you hate it when tests are constructed to make the right choice obvious? It is, of course, letter C (although generic substitution is allowed).

]]>
Wed, 15 Sep 2010 12:38:15 -0600
Remember the 40-hour workweek? http://www.cypress.com/?rID=46150 In the afterglow of Labor Day weekend and the lazy days of summer, I found an interesting mini-program on NPR's Future Tense with the title above (futuretense.publicradio.org/episode/index.php). This short audio program talks with Maggie Jackson, author of Distracted: The Erosion of Attention and the Coming Dark Age (maggie-jackson.com/writing/), and in this as well as in many more resources/interviews online Ms. Jackson discusses two things mainly:
 
1) the "technologies" we have today can/are consuming us 24x7 (brilliant insight, right?) and the result is we are NOT ABLE to pay attention (I know I can feel like an ADHD elementary school kid some days when I let my email, etc. control my behavior), BUT
 
2) we CAN learn to pay attention, and there is even a new science of attention.
 
Coming off a 2-week vacation, I felt behind when I returned and have been spending extra time and energy to get (or feel?) caught up. And this was even after I had mostly kept my email attended to while I was gone. But on past vacations (I had 2 weeks off in June as well) where I tried to take a vacation from email too, while I got a good 2-week break, I REALLY ended up feeling underwater afterward. In fact, only this weekend did I work the unread emails from my June vacation down below 1000.
 
There are a great many advantages to the instant/always-in-contact life supported by our BlackBerrys (et al.) and cell phones that can reach us anywhere. And undeniably the shrinking globe and its effect on projects and products has led to the 24-hour workday. But a friend pointed out that 40 years ago a Dale Carnegie seminar addressed pretty much the exact same issue: how our workload and urgent tasks had increased due to the great technological advance of the day - the telephone.
 
So I leave you with this question (and apologize that there isn't a way to add comments below, the feature is coming soon):
What would your life look like with a true 40-hour workweek?
]]>
Sat, 11 Sep 2010 17:46:44 -0600
Celebrate yourself this labor day http://www.cypress.com/?rID=45861 I have been reading a great novel lately, Dennis Lehane's The Given Day. It isn't about football (that was a movie called "Any Given Sunday") but it is about the labor organization push in the early 20th century, told from a Boston police officer's point of view. I cannot vouch for the historical accuracy, but taking much of it as near-accurate, I would say laborers (anyone who labors, and that's you and me as we draw paychecks) have come a long way.

This weekend in the US is set aside to say goodbye to summer but, more importantly, to remember the labor force on Monday. I have had a great two weeks off celebrating the fruits of my own labor, so I just want to wish all of you a happy Labor Day and ask that you raise a toast to yourself.

Cheers!

]]>
Thu, 09 Sep 2010 18:19:09 -0600
What's more important: Implementability or Testability? http://www.cypress.com/?rID=46050 Executive summary: Testability is more important than implementability.

In a previous post (generated in response to my back and forth with Jack Ganssle on requirements) I listed my rules on what constitutes "good" requirements, which as a review are:

1) non-ambiguous identification (which I assert is gained by using "shall" only in a statement that is a requirement), and
2) the statement is implementable and testable.

Of course, usually different people are involved in testing and implementing, so #2 should be split into 2a (implementable) and 2b (testable). NOTE these are BOTH needed. A requirement "stinks" if it is either not implementable or not testable. BUT is one more important than the other? ABSOLUTELY

A friend (thanks Dennis!) pointed out a classic case of product-induced accidents that highlighted the dangers of software control of safety-critical systems - the Therac-25 radiation therapy machine. The following description has been excerpted, with only minor edits, from Wikipedia (en.wikipedia.org/wiki/Therac-25).

"The Therac-25 was a radiation therapy machine produced by Atomic Energy of Canada Limited (AECL). It was involved with at least six accidents between 1985 and 1987, in which patients were given massive overdoses of radiation, approximately 100 times the intended dose.

"The machine offered two modes of radiation therapy:
1) Direct electron-beam therapy, which delivered low doses of high-energy (5 MeV to 25 MeV) electrons over short periods of time, and
2) Megavolt X-ray therapy, which delivered X-rays produced by colliding high-energy (25 MeV) electrons into a "target".

"When operating in direct electron-beam therapy mode, a low-powered electron beam was emitted directly from the machine, then spread to safe concentration using scanning magnets. When operating in megavolt X-ray mode, the machine was designed to rotate four components into the path of the electron beam: a target, which converted the electron beam into X-rays; a flattening filter, which spread the beam out over a larger area; a set of movable blocks (also called a collimator), which shaped the X-ray beam; and an X-ray ion chamber, which measured the strength of the beam.

"The accidents occurred when the high-power electron beam was activated instead of the intended low power beam, and without the beam spreader plate rotated into place. The machine's software did not detect that this had occurred, and therefore did not prevent the patient from receiving a potentially lethal dose of radiation. The high-powered electron beam struck the patients with approximately 100 times the intended dose of radiation, causing a feeling described by a patient as "an intense electric shock". It caused him to scream and run out of the treatment room. Several days later, radiation burns appeared and the patients showed the symptoms of radiation poisoning. In three cases, the injured patients later died from radiation poisoning." (end of wikipedia excerpt)

The conclusions of a safety review commission showed that although several coding errors were found, the root cause of the failures was the design, and more specifically that the design made it "relatively impossible to test in a clean automated way". (from the same Wikipedia article)

In order to ensure a quality product, it must be tested, and the extent to which it can be tested will directly impact the quality. The goal is to find "all" defects (defined as 99%, or 99.9%, or 99.99%, etc. as required by the criticality of the system) before shipping to customers - for safety- or mission-critical systems the implications of "defect escapes" can be catastrophic.

So back to requirements, how does this impact our requirements writing? I have a recent project's experience fresh in my mind and have formed my own opinion. I believe that the testability, test planning and definition of the test system are MORE important than the implementation. Of course, the implementation is important, and a poor implementation will lead to a poor product, but you need to have high confidence that the testing can and will find out whether the implementation is good or bad.

So extensively review the requirements from a testing point of view (best if the person/team responsible for testing does this) and go as far as defining or designing the test system required. The key benefit of having the actual test system up and running and available during the project implementation is that both the design and the test teams can take advantage of it. And when the test team begins to find defects, the design team can run the same tests and do their detailed debugging using the same environment. A trivial sketch of how a requirement maps to test cases follows.
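To illustrate "testable as written", take a made-up requirement: "The output level shall never exceed 100." That single shall-statement maps directly to automated positive and negative test cases. A hedged C sketch, with assert() standing in for whatever pass/fail reporting your real test system provides, and set_output_request()/get_actual_output() as hypothetical hooks into the device or a simulation of it:

#include <assert.h>

/* Hypothetical device hooks -- the real test system would drive the
   hardware, or a simulation of it, through something like these. */
extern void set_output_request(int level);
extern int  get_actual_output(void);

/* Requirement under test: "The output level shall never exceed 100." */
void test_output_limit(void)
{
    set_output_request(100);
    assert(get_actual_output() == 100);  /* positive case: the limit itself
                                            is reachable */

    set_output_request(250);
    assert(get_actual_output() <= 100);  /* negative case: overdrive must be
                                            clamped, never passed through */
}

The negative case is the one that matters most - it is exactly the kind of "demand more than the safe limit" test that the Therac-25's design made nearly impossible to run in a clean, automated way.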

Beware of "hidden" untestable requirements. Because for those the only test force you will have is a large (and possibly fleeting) customer base. 

]]>
Wed, 08 Sep 2010 11:53:54 -0600
Why should embedded developers learn to program iPhone? http://www.cypress.com/?rID=45785 I have been intrigued with programming for a mobile platform for quite a while, but before the iPhone SDK was released to the public, it remained only intrigue. When I saw the velocity with which applications were being developed for iPhone (and iPod Touch, since they share most of the same capabilities), intrigue inched its way towards action (or perhaps millimetered its way). I began watching (passively) the Stanford lectures on iTunes, even collected them all. Then during the last Cypress shutdown for Thanksgiving, I jumped in, got a couple "Hello World" apps out of the way and began reading Apple's voluminous documentation. That led to actively participating in the Stanford iTunes lectures and finally seeking out books.

 

After several books, I found for me the best one is Head First iPhone Development (oreilly.com/catalog/9780596803551/). What is different about this one is that it starts right away doing a real app (and gently tips its hat at "Hello World") and eventually starts to build up an application, chapter by chapter, to include database and even the camera along with other iPhone staples like navigation controllers, data pickers, button actions, and animation. Another great thing about this book is how they present the material (it is a "Head First" characteristic) where there isn't just "lecturing" and "examples" but many different ways to get to the point (questions, interviews, try-it examples, even some that crash).

 

But back to the big question, why should an embedded developer whose mainstay is "C" and assembly and board support packages care about learning to develop for the iPhone? There are three reasons for an embedded developer to learn to program the iPhone:

 

1) It stretches your gray cells,

2) it is the future of development, and

3) it is the future of embedded. 

 

Stretch your gray cells: One way to guarantee you get stale (less marketable) is to do the same thing over and over. As a contractor, when I had a choice I always picked my next assignment to be different from what I had done before. Develop sometimes, test sometimes, 8-bit sometimes, 32-bit sometimes, different languages, different industries, etc. And by doing this I have gotten comfortable with the idea of change (and a little restless, too). The iPhone/iPod Touch offers a reasonably priced, publicly available platform with readily available extensive tools (free SDK from Apple). And the patterns in iPhone are certainly different from those most embedded projects employ. Which breaks your mind out of its rut and might just give you a new perspective on your next embedded project. Besides, showing off an iPhone app you developed surely leads to a date faster than describing your latest 8051 interrupt handler.

 

iPhone development is the future of software development: Objective-C is the basis of iPhone development, but that isn't all there is to it - it is the frameworks. Cocoa Touch, Core Data, UIKit, Quartz and OpenGL are just some of the pieces that are combined by the iPhone SDK. What this does to an embedded developer is force you to learn how to use what's been done already rather than inventing it yourself (you know who you are). And there are many rules to learn in order to allow the frameworks to evolve as devices and features also evolve. This kind of development may be less foreign to PC app developers, but for a typical embedded guy/gal this is like learning Latin, at first. But as development devices increase in capability and design cycles shrink, it isn't hard to see that this style of development WILL come to your local embedded project.

 

iPhone demonstrates the future of embedded: The iPhone is a powerful computing platform masquerading as an embedded device. But it is the apps available that show the real future of embedded, where every little (or large) device is providing data and controls to the "cloud" and an app on a device like the iPhone (or iWatch) can access or control it from any corner of the globe (even though a sphere cannot have corners). By working on the app side of the iPhone the embedded developer can gain new insight into how his/her product might need to play in the future.

 

Reason 4 - $$$: Of course, flirting with iPhone development (or iOS development, now that there are 3 different devices: iPhone, iPad and iPod Touch) just might lead to a profitable extra-curricular activity: there are 100 million iOS devices out there and already 250,000 apps in the App Store.

]]>
Mon, 30 Aug 2010 21:38:42 -0600
I missed my fifteen seconds of fame :) http://www.cypress.com/?rID=45772 I left for my Hawaiian vacation on August 20, the same day I would have received this email newsletter from Embedded.com:

 

8-19-10: Ganssle vs Pearson on requirements / Crenshaw on mediocrity / PID Basics

 

That "Pearson" was/is me, and refers to the the back-and-forth between my article "I don't need no stinkin' requirements" (www.eetimes.com/design/embedded/4205794/I-don-t-need-no-stinkin--requirements-) and Jack Ganssle's column "I desperately need stinkin' requirements" (www.eetimes.com/discussion/other/4206193/I-Desperately-Need-Stinkin-Requirements) written in response to mine. While the titles look in conflict, they really are dealing with different issues. Mine is about approaching your design with the expectation that changes will happen. Jack's is about the need for real good requirements. But since Jack turned the spotlight on the requirements, that's where the focus went.


It is always fun to get a newsletter in your inbox that announces your new article (sometimes this is how I learn it was finally published), but this was even cooler: I had created a controversy (actually the editor who changed the title helped) and brought out not just one but two responses from a legend in the (admittedly nerdy) embedded developer world (more about Jack Ganssle here: http://www.ganssle.com/bio.htm).


I posted to my blog last week (www.cypress.com/) where I presented my definition of requirements that don't stink, which need two things: 

1) unambiguously identified requirement statements, which I expect to be tagged with "shall" exclusively, and 

2) requirement-statements that are implementable AND testable. 


Jack has also followed up (I don't think he reads my blog, but whatever) where he posted the "rules for requirements" from a talk by (apparent Seattleite) Steve Tockey (www.construx.com/Page.aspx). His three rules (presented in Jack's latest column www.eetimes.com/discussion/other/4206402/More-on-requirements): 

1) A requirement is a statement about the system that is unambiguous,

2) A requirement is binding (product requires it, customer will pay for it, product is unacceptable without it), and

3) A requirement is testable.


I believe Steve's/Jack's #2 is obvious - a requirements document contains what the product "requires", and the process of turning the requirements into a project plan will often separate the wheat from the chaff, since every requirement increases the cost and schedule and risk for the product. This statement is a bit gratuitous and either aimed at new college grads or hackers who have joined a "real" software/embedded project. That is why I left it out.


The other two agree with my two - within all of the words in a requirements document, the team needs to be able to separate nice-to-know info from must-do requirements; and then they need to agree the statement can be implemented AND tested. I think Jack underemphasized the implementation part. From my marketing seat, I have often struggled with developers who thought (usually with warrant) that I was requiring a specific implementation (and I might have been, unnecessarily) OR was asking for something impossible (usually NOT true, and from time to time I had to roll up my engineering sleeves and prove it).


So, what are you gonna do? Understand that requirements are important; writing good ones is not easy, but it is essential, and you must strive for great requirements.


Hang loose (I have one more week left :)

]]>
Sat, 28 Aug 2010 20:50:37 -0600
In Hawaii with no requirements http://www.cypress.com/?rID=45716 Time-off, vacations, holidays, these all give one a chance to rest the brain and let in other interesting ideas. Or activities. I learned many years ago that a vacation, a real vacation, begins with as few requirements as possible. So what I typically do is:


1) find a nice accommodating resort with condos or villas - room to stretch out and most importantly a kitchen and refrigerator

2) plan to do as much as possible at the resort (usually I choose one situated on a beach with either interesting surf or a reef for exploring)

3) buy what I need in as large a quantity as possible - fewer trips and supplies at hand


I have found several places I really like to return to, and every year I try to consider one new place, but I also realize this will put more strain on my relaxation.


This trip we are on Kaanapali beach on the island of Maui. Not the super-developed south Kaanapali beach but the north side, north of the Sheraton and Black Rock. We have about 1 mile of beach stretching in either direction from the resort, great for walking, and a great colorful-fish-filled reef.


It only takes a simple set of swimming goggles to explore the reef - every year it seems to be easier to float in this salt water, for some reason. My daughter is loving it - she is a real fish - and Mom (not mine, hers) gets to walk the sands while we are out on the reef; that is, when she is tired of the water.


Sunsets have been pretty good (still trying to see that green flash as the sun disappears) and the water is comfortable, temperature and surf-wise. Although my daughter is looking for a little more surf to put her boogie board through its paces.


So maybe not "no requirements" since we do have to eat and my wife won't just eat something cause she's hungry, it also has to really taste good. But with some supplies and a fridge, at least there are drinks, snacks and ice cream around.


Hang loose!

]]>
Thu, 26 Aug 2010 21:02:19 -0600
The best book for learning to program iPhone http://www.cypress.com/?rID=45736 Vacation allows for activities normal work-a-day life prevents. When you are a programming nerd that means...learning to program something new, of course. What better platform to learn than the super-popular ultra-cool iPhone?

Yes, there is a tropical paradise outside: beach, palm trees, reef, pool, etc. And make no mistake, I am out there every day, not just looking at it from the living room window or balcony (we do have a great view from the balcony, though). I am in the water every day with my daughter and we all start every morning with a nice walk up and down the beach. But a nerd's gotta do what a nerd's gotta do, and programming nerds program.

I started with iPhone late last year, spending a good bit of the Thanksgiving shutdown reading Apple's guides and watching the Stanford lectures (itunes.apple.com/WebObjects/MZStore.woa/wa/viewPodcast). That was a good start, and with a Mac and the free SDK, it is easy to get going. There is lots of documentation provided by Apple, both primers and references, and it all gets linked into Xcode, the Apple IDE. I got a little more dabbling in during Christmas break, but I was stalling; I needed a different approach, so I went looking for books.

Just before my first summer vacation (June) I got my hands on Sams Teach Yourself iPhone Application Development in 24 Hours (www.amazon.com/Teach-Yourself-iPhone-Application-Development/dp/0672330849/ref=sr_1_1) and during that vacation I got through about half the book (12 "hours" or lessons). It was the best I had seen from a step-by-step primer, but unfortunately I had to return it (late, very late) to the library. Yep, I'm too cheap to buy my own copy. But at that point I would have recommended it as the best.

Just before this (2nd) summer vacation I checked out a book I now think is the best iPhone programming book for beginners (beginners with iPhone programming, but not beginning programmers). I am now halfway through this book, and the flow is perfect, building real apps from the start (one of the first projects is a Twitter client, not a "Hello World" simple display project) and systematically building and expanding a rich, multi-featured app (the main project is a bartender's friend, or DrinkMixer).

The best book? Head First iPhone Development: A Learner's Guide to Creating Objective-C Applications for the iPhone (www.amazon.com/Head-First-iPhone-Development-Applications/dp/0596803540/ref=sr_1_1).

I'll talk more next post about why I think this is the best book and why an embedded programmer can benefit from learning to program iPhone.

Hang loose!
]]>
Thu, 26 Aug 2010 20:58:45 -0600
Why do our requirements stink? http://www.cypress.com/?rID=45668 My recent article "I don't need no stinkin' requirements" (www.eetimes.com/design/embedded/4205794/I-don-t-need-no-stinkin--requirements-) presented a design method to deal with the common experience of changing requirements. I still hold that requirements change and you need to be prepared for it, but esteemed embedded expert Jack Ganssle has posted a counter-point article "I desperately need stinkin' requirements" (www.eetimes.com/discussion/other/4206193/I-Desperately-Need-Stinkin-Requirements) where he points out the problem is not changes in "requirements" for the product (that is, how the product needs to perform) but that the written set of requirements stinks. And rather than blunder ahead with bad requirements, we need to demand (and work for) well-developed, complete requirements. Bravo, Jack - this is the crux of the problem: fix the requirements and they won't change, and there is less disruption.

We need to write good requirements. And everyone around us has an idea of what those requirements should be. And how easy they are to write. Here is a classic interchange between Dilbert and his boss illustrating the problem.

(Dilbert comic strip embedded here - see Dilbert.com)

It is possible, however, to write a good requirement. I adamantly believe it starts with an agreed nomenclature. I stick to what in my experience seems to be the de facto industry standard - the word "shall". Any statement that is a requirement will contain a "shall". Any statement without a "shall" is not a requirement - it may be information and it may be useful, but it is not a requirement (and therefore is not going to be done).

Next: is the statement implementable and testable as written? Is it specific enough to have a test that shows it has been achieved (positive test), and more importantly, are there test cases that verify (when they fail) that the requirement isn't satisfied? Using the Dilbert strip above, we could start by asking how many clients and how many servers must be supported, and perhaps the result is: The system shall support 17 servers and 2100 clients simultaneously. You could conceive of a test to verify that, though in reality the cost of running the test case, or simply acquiring the necessary hardware, might preclude running the test (at least in 1994 when the comic first appeared), but at least there is enough information.

What if a "shall" statement is not implementable and testable? Then break it down. Ask "What does this mean?" or more specifically "Does this mean xxx?" until you get down to statements that are implementable and testable. But avoid requiring an implementation unless it is absolutely a "requirement". For instance, in the underlined requirement above, Dilbert might ask "Does this mean we can use Commodore 64 computers for all the clients?".

As Jack also points out, there are great books on writing requirements (his favorite is Karl Wiegers' "Software Requirements") but not a single university course. So most of us get our degrees from the college of "hard knocks and lost weekends".

 

]]>
Thu, 19 Aug 2010 15:53:05 -0600
Who needs (final) requirements? http://www.cypress.com/?rID=45601  I wrote an article last fall based upon a cartoon I saw over 20 years ago:

http://www.abberley.co.uk/asap/images/You_lot_start_coding.gif

You lot start coding...I'll go and see what they want.

 

Embedded Systems Design has just published my article with the title: I don't need no stinkin' requirements! Unfortunately, it was missing this great cartoon that spurred the whole idea (and has stayed with me for over 20 years).

Is it really true? Well, if you read the article you will see that I propose a design that doesn't need FINAL requirements, accommodates late changes, and tries to minimize the impact of those changes.

Truly, we engineers (even though my card says marketing, my boss will confirm, sadly for him, that I am still very much an engineer) work frequently, and sometimes gladly, from few or ambiguous requirements, and our managers or project leaders are more concerned about that than we are. Because we are quickly and constantly designing in our heads; we write our own requirements on imaginary mental paper. But that's not all bad, as I hope my article supports.

Have a read; a few visitors have left comments on embedded.com with their own views, and it seems I was on the money. Share your thoughts, either at embedded.com or with one of the links below.

 

I read your article and think...

Now you read mine.

]]>
Sat, 14 Aug 2010 13:43:27 -0600
The next blog subject is . . . Ladybugs http://www.cypress.com/?rID=45366 Last post I asked you, the esteemed readers of my blog, to tell me what "method" I should use to pick the topic of this posting. The choices I gave were either (1) suggest a topic (16%), or (2) choose a random "hot topic" (16%), or (3) write whatever my daughter suggested (50%). The obvious winner was my daughter, and the loser is me, because now I have to write about ladybugs.

 

Ladybugs?? So here's the story. My daughter really likes ladybugs, she even collects them in a way. Rather than capturing and imprisoning them in a jar, she "collects" the ones she finds onto specific bushes and then monitors and plays with them. Playing with them consists of getting them to crawl on her finger, hand, arm, and then transferring them from one stem of a bush to another. She can usually see the bush she "collects" them on from the kitchen window, so instead of looking at them in a glass jar, the ladybugs see her looking at them from inside a "jar" - the house.

 

This same girl also plays with butterflies, dragonflies and potato bugs - but NO spiders, no matter how (microscopically) small they are. It is not uncommon to hear a shriek only to find her pointing at a spider with a 1mm body and 2mm legs, begging for someone to eradicate it. And then she reminds us that there is a spider that is almost 1 foot long (http://news.nationalgeographic.com/news/2006/10/061027-tarantula-video.html). That's not a spider, that's a foot! (I tried that same joke when we realized the pizzas we had last night were 12" in diameter - here's where we were having great pizza: http://www.tuttabellapizza.com/).

 

So what's this have to do with embedded design methodology? I'm glad you asked. When you are in de-BUG-ging mode, it is very seductive to deal with the nicer, cuter, more friendly bugs (ladybugs) and to avoid tackling the big, tough, nasty bugs (like a goliath birdeater tarantula with 1-inch fangs). But the truth is, it takes the same approaches and tools to deal with easy or hard bugs.

 

Is there a disadvantage to leaving the worst bugs until last? Absolutely: it is the goliath birdeaters that break your schedule and make the marketing guys cry. Even when the issue is very rare and statistically not important (like a famous Pentium bug was), these days issues are measured in PPM (parts per million, where the numbers are expected to be low single digits), not percentage. And with Twitter and other social communication methods, any single disgruntled user who encounters your "failure" can raise the issue to general knowledge/widespread panic (imagine going to bed after determining your "bug" only presents itself 1 time in a million and waking up to an interview on the "Today" show with a kid who demonstrates how to force any of the 10 million widgets sold to go from smartphone to paperweight in seconds).

 

It is good to find, collect and quarantine the nice easy bugs as fast as possible; just don't become too fond of the ladybugs and ignore (until it's too late) the huge-fanged tarantulas. The knowledge gained in root-causing and correcting that elusive bug can be pro-actively applied to the next projects to prevent these types of problems. And raise confidence in your schedules (especially the things you schedule for your free time).

 

BTW, anyone doing the math might ask what the other 18% of reply-cants (people who replied, duh) wanted this topic to be? Well, of the 6(!!) replies I got, one person didn't read the operating instructions and simply gave me very good feedback but no topic. And due to a rare occurrence of a minor bug in my calculations, one divided by 6 equals 18% :). And I was soooooooo glad I didn't have to use Hot Trends and dice - out of 20 topics, the only one remotely interesting was Gorilla Glass (http://www.corning.com/gorillaglass/index.aspx). 

Sound off and let me know what you think:

 

 
]]>
Mon, 02 Aug 2010 17:09:31 -0600
My blog where I suck up to the readers and solicit your feedback http://www.cypress.com/?rID=44999 I like my blog, I enjoy working on it and, under the right circumstances, ideas flow like blood from a shaving nick. But the real world generally imposes its ugly mug with things like work deadlines, and so my blog stales (I wondered about using "stale" as a verb, so I looked it up, and prefer definition #7  www.definitions.net/definition/stale). 

And then I saw this comic (which is covered by the Creative Commons Attribution-NonCommercial 2.5 License) and have to agree that the readers of a blog are the most important characteristic of a successful blog (readers who, in the case of this blog, I must say, are very intelligent, well-read, handsome and look like they have been working out).

 

Good content updated often (see first paragraph) is essential, but what is written, often and well, also needs to appeal to the readers.

So, dear readers, here is your call to action: tell me what YOU want to see discussed in my NEXT blog. Use the following numbered suggestions with Easy-Click(TM) feedback/email links:

1) Write next blog based on Google Hot Searches (USA) selected by rolling 4 dice. If you select this option, sometime between July 30 and August 1 I will go to www.google.com/trends/hottrends armed with four regulation dice. After shaking these four dice and getting a number between 4 and 24, I will subtract 4 and the search number matching the result will be the theme/topic/seed for my next blog. 

2) Write next blog based on my daughter's suggestion. If you select this option, sometime between July 30 and August 1 I will ask my 10-year-old daughter what subject to write about. I will use the following language, "Think carefully before you answer because it's really important. What topic do you want your father to write about in his next blog post." After the answer is given, I will ask a variant of "Is that your final answer" and after that I can only ask for clarifications, not lobby for a change of mind. 

3) Write next blog based on the most popular suggested topic. If you select this option, I will take the most popular suggested topic (where I have the liberty of paraphrasing slightly similar suggestions to narrow the choices) and write my next blog around this topic.

So it's up to you. If you vote multiple times, indicate which is your "final" vote; otherwise the first vote received from an address will be used. The more votes, the more interesting this will be. And you can expect a future blog describing this whole experience, which (since this is Cypress) will very likely include an analysis of the numbers.

Happy work-week 30!

]]>
Sat, 24 Jul 2010 12:59:13 -0600
Bad publicity is better than no publicity http://www.cypress.com/?rID=43341  

The title states the common wisdom, which is that being known for anything is better than being unknown. This is a long-held aphorism, but does it really hold in this hyper-connected post-internet age, where a seemingly innocuous app on your cellphone can scour the digital airwaves for any speck of dirt on any topic, and a less innocuous app may be sprinkling those specks?

 

As an example, I saw a lecture in which the city of Seattle, Washington was the subject of study. The lecturer presented how often Seattle appeared in the national news and, when it did, how the city was portrayed and even how it was referenced. Why Seattle? This was nearly one year after the WTO riots in Seattle. (If you missed it, there is a movie, of course, called "Battle in Seattle". I watched it; it has entertainment value though I cannot vouch for its accuracy. During those days I was just miles away at the UW studying and had a number of classmates volunteering for the conference. The gig didn't pay enough for me, so I passed.) The lecturer showed that before WTO, Seattle was very infrequently mentioned, and when it was, it was qualified as "Seattle, Washington". Post-WTO, not only did Seattle continue to appear more frequently in the national press, it was elevated to the same single-name status as Madonna and Prince, referred to simply as "Seattle". While I live in the area and may be biased, I don't feel that WTO has in any way cast a permanent pall on the reputation of the city.

So how about for you and me, lowly designers, coders, testers and promoters of embedded solutions? Does this hold for us? Is there really "good" bad publicity? Well, unlike "the Donald" or Miley Cyrus or Kathie Lee Gifford, when you or I get bad publicity it will not be from an off-color remark or a bad hair day; it will be from some mistake, budget overrun or missed schedule milestone. Can this type of bad publicity be turned good? ABSOLUTELY.

Being put in the public eye is a chance to show what you are made of and what you can do, but more often than not it becomes a chance to publicly strip the skin, nails and hair off of some poor soul in the guise of quality improvement. The quality-improvement goal is usually truly felt and desired; unfortunately, a deep-seated desire to make an example of someone often fuels a near-medieval turn of events. There are three steps you need to follow if you ever find yourself in this position, and one key preparation step.

Preparation: get honest, first with yourself, but soon with the world. This means isolating the "thing" or mistake from the players and motivations. This is the time to be brutally impartial. What is the perception of what happened? What really happened? (Some companies have processes to get to the heart of this question; the gist is to ask probing "why" questions until the true root cause of the failure is revealed.) If there are standard processes or common practices involved, were they followed? Are there truly "unique" situations involved, and if so, what? More than likely, a set of factors have combined to cause the event, and the key in this step is to really step up and identify them - obviously so they can be resolved and corrected, but just as importantly, so they can be prevented in the future.

Now what? Here is how to capitalize on bad publicity. First step: realize you are now under scrutiny and act as if your mom, spouse, and child are all watching everything you do - you need to spend extra effort visibly doing all the right things rather than excusing, explaining or rationalizing past behavior. NOTE: this is before admitting ANY "wrongdoing". 

Remember the "That was easy" campaign for Staples? In an interview after the success of the campaign had been proven, the VP of marketing at Staples said they worked on that campaign for a long time - not because the ads were hard, but because she insisted that Staples actually become easy before launching the ads. The ads became a statement of how Staples had changed, not an empty tagline to be replaced in a few quarters with something cuter like "We're just more fun". 

So that leads to the second step: publicize your new, better, higher-quality status. Because you changed in step 1 (and knew how to change from the analysis in the preparation step), now it's time to tell the world. This is showing what you learned and how much better you are now. Serial entrepreneurs do this all the time; as one startup implodes they launch the next, full of the lessons that will lead to the next success. In fact, to a venture capitalist, a failure plus an understanding of what happened can be as valuable as a success, or more so, since luck and market timing often produce success.

Remember the financial scandals that sent the famous Michael Milken to prison, and the publicity on the lack of ethics in corporations and finance? Now Michael Milken has his own think tank and several philanthropic ventures. Mistakes can often be very expensive, but learning why they happened AND incorporating that learning into behavior is truly valuable. And valued. Hackers become security consultants. Criminals become fraud consultants. Knowing what can go wrong and teaching how to avoid the same thing is valuable.

So now the final step: incorporate your "learning" into your day-to-day life. This may mean becoming a consultant and starting a whole new career, or it may mean updating company business processes with this learning. Whichever route you take, this is what takes you past the event with bad publicity and into a new future with a higher profile. Consider any one-hit wonder in the recording industry: once they drop out of the spotlight, they are forgotten. You can do this too, and move on after a mistake that gets bad publicity, but it is much better to seize the spotlight, provide a new you for the spotlight to shine on, and keep the guy running the spotlight wanting to shine it on you. 

]]>
Fri, 25 Jun 2010 14:07:08 -0600
No one else has complained http://www.cypress.com/?rID=44103 This week I am on vacation, but you would have thought I was on vacation the last month if you were monitoring my blog. Truth is, when your day job(s) pushes back, something has to give. But I'll keep trying harder. ;-)

As I said, I am on vacation and these days a key amenity at any hotel, whether for business or pleasure, is internet access and most notably WiFi (keeping those iPads, iPhones, iPods and computers connected). So first thing I do is check WiFi availability, and find I have none. :( Then I check for wired access and find there is no cable, but the hotel services guide says there is wired access. So I go to the front desk to ask "What's the deal?" and try to get this sorted. 

First I ask about WiFi, and find out from the woman behind the desk that it should be working. I say it isn't and that I even walked down the hall through several buildings, also finding no service. :-( :-(

This is where the "classic" customer service response comes in. The woman at the desk says scoffingly, "That can't be, no one else has complained." Customer is always right? So now I have to convince her I know what I am doing, and just to back it up I mention that another guest a few buildings away earlier told me he also was having trouble (which he solved by purchasing an Apple Airport Express). She says, "I'll contact the IT person," as an imaginary black hole of response opens in my mind. In the end she gives me an ethernet cable ("Are you sure there isn't one by the phone?") and I am gone.

My point with this diatribe is twofold: 1) You cannot wait for a customer to complain (or several, to be sure there is a real problem) in order to find your "problems" or bugs; and 2) If and when a customer does complain, take it seriously. For the developer this means thinking about the customer use cases - not just the designed-for cases but the weirdo cases as well, the so-called "corner" cases. Some of the best testing, as far as uncovering nasty-good bugs, has come from undirected or "ad hoc" testing, probably because in these uncontrolled circumstances one is more comfortable performing the "weirdo" cases (or corner cases, if you insist).

If I applied this to the hotel WiFi system, I (as hotel general manager) would set up a procedure where at least once a day someone walks through the buildings or the grounds where WiFi is expected and verifies it is there (easy with a WiFi-enabled phone: just start streaming a YouTube video and walk around). I am sure there are more comprehensive (read: expensive) sensor systems that can be employed, but who could reject a procedure that only requires an individual to walk around?

Oh, and just to be fair, about a day and a half later WiFi miraculously appeared in my room. Happy surfing 8-)

]]>
Fri, 25 Jun 2010 14:05:36 -0600
Twelve-bar blues, embedded style http://www.cypress.com/?rID=39407 The most popular chord progression in popular music (http://en.wikipedia.org/wiki/Twelve-bar_blues), 12-bar blues defines a pattern that is easy to learn, very flexible in its application, and, with just a little ear training, easily recognizable.

The basic form of 12-bar blues uses only 3 chords: I (the tonic), IV (the subdominant) and V (the dominant). In the key of C that means the three chords are C(I), F(IV) and G(V). The 12 bars in their basic progression have the chords in this order (each bar has 4 beats): 

C | C | C | C | 

F | F | C | C | 

G | F | C | C | 

Easy? You can repeat this all night, and that's what happens in many places around the world on any given Saturday night.

So what does this have to do with embedded design and PSoC in general? Both employ strong, repetitive patterns; when you understand the pattern you can follow and alter it and thrive; when you fail to learn and understand it, you and your design can be knocked off your feet by a strong wind.

So what's the pattern for general embedded design? Input-Control-Output (I-C-O). An embedded system will have input(s) to read, output(s) to set and control logic that ties the input status to the (new) output state. All this can be done serially, often the case in a slow-changing system, or it can be pipelined and parallel, or each of the three parts can be asynchronous to the others. 
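
If a sketch helps, here is the serial form of the pattern in C; every name below is a hypothetical placeholder for your own types and routines, not any particular PSoC API:

/* Input-Control-Output as a serial superloop. All names here are
   hypothetical placeholders for your own types and routines. */
typedef struct { int button; int adcCounts; } Inputs;
typedef struct { int ledOn;  int pwmDuty;   } Outputs;

static void ReadInputs(Inputs *in) { (void)in; /* sample switches, ADCs */ }

static void RunControl(const Inputs *in, Outputs *out)
{
    /* tie input status to the (new) output state */
    out->ledOn   = in->button;
    out->pwmDuty = in->adcCounts / 4;
}

static void WriteOutputs(const Outputs *out) { (void)out; /* drive LEDs, PWMs */ }

int main(void)
{
    Inputs  in  = {0, 0};
    Outputs out = {0, 0};

    for (;;)                    /* repeat all night, like the 12 bars */
    {
        ReadInputs(&in);        /* I */
        RunControl(&in, &out);  /* C */
        WriteOutputs(&out);     /* O */
    }
}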

Understand this pattern, and when you are thrown into a legacy design, you have a basis to begin exploring: What are the inputs and outputs? Find them and the code associated with them, look at what's left and it should be the control. Like the blues, the basic pattern has variations and repetitions, so don't just treat it like a hammer and wait for nails to throw themselves at you. That's why everyone doesn't do this stuff, right?

What about PSoC? Its repetitive pattern is at a lower level and helps you build the input and output portions of your design. In PSoC, the components and user modules follow a consistent pattern of APIs (at least the basic ones do), so you can get the lower-level things right more easily (always call "Start", for example) and leave more time to worry about the big problems.
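
As a tiny sketch of that consistency (the module prefixes below are illustrative placeholders, not lifted from any datasheet - check the API files generated for your project):

/* The pattern, not the particulars: every user module is brought up
   with the same verb before its module-specific calls. */
extern void ADC_Start(void);
extern void ADC_StartConvert(void);
extern void PWM8_Start(void);
extern void TX8_Start(void);

void HardwareInit(void)
{
    ADC_Start();         /* same verb, every module     */
    PWM8_Start();
    TX8_Start();
    ADC_StartConvert();  /* specifics come after Start  */
}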

But here is where embracing the I-C-O pattern is important - there is so much you can do with the bits and blocks of the PSoC, you can easily weave a spider's web of input and outputs where your control gets stuck and you can't find the end of one input string or the beginning of another. Keep a clear eye on how your ADC, PGA, DAC, PWM, DMA, USB, I2C, etc user modules fit into the inputs and the outputs (document it). If you let them "take care of themselves" you can get a nasty web of chain reactions and side effects. And once you're in a web, struggling can be futile, physically or with your scope and debugger - you've got to get out and take a look, find your way back to the pattern. And maybe do some serious cutting and chopping to even see what's there.

OK, for extra credit: what's the PSoC blues scale?

]]>
Sat, 15 May 2010 17:08:17 -0600
Christmas gifts and everyday giveaways http://www.cypress.com/?rID=39948 It is the last few days of the Christmas shopping season and I have a 9-year-old to shop for. Uniquely or not, my daughter does not consume massive quantities of advertising; her television is regulated and dosed out primarily through videos from the library (no, not just geeky and educational stuff, she also gets the video versions of cartoons like "Tom and Jerry", but sans ads). Due to this lack of commercial media, Adrienna does not have a long list of Christmas "needs" and has not barraged us with a gigantic Christmas list. BTW, she knows about Santa and "lists" but also knows there is no Santa Claus (sorry, Mark). I asked her if she remembers believing in Santa Claus and her reply was that she remembers pretending to believe in Santa. Here is a gratuitous child picture.

So what's the point of this post? The hardest part of buying a good gift for Adrienna is that she gets what she needs (and wants) pretty much all the time (but her wants are restrained). This makes getting a useful/worthwhile gift for Christmas hard. She has asked for a computer of her own, but when asked doesn't really have a need; she uses the Mac Mini in the den when she needs it, both to play games and do schoolwork - right now that includes building her first PowerPoint presentation (that is for school, not fun and games, duh!). So because she gets "gifts" all the time, there is a good chance that Christmas is relatively anti-climactic.

So again, what's the point? I'm just now getting to it - giving "special" things away every day negates their impact. Generally in your work life, and in projects specifically, if you are working every weekend, this becomes "regular" instead of special. Humans tend to appreciate regular events much less than special events. This is true with prejudice in the workplace. When the death march begins, those poor souls who have already worked every weekend are drafted just like everyone else. That's an important lesson for the individual, but there is also a lesson for managers: if your reports are working every weekend, don't expect to be able to squeeze out the extra push when you need it.

So my "gift" to all of us (me and you) is this admonition: Spend your project time well and reserve "your" time for yourself - motivate yourself to fill the designated work hours as full as you can and use the off-hours for recharging - so you can blast out of the gates again on Monday. Save your "gifts" and present them sparingly, for those "special" events. And since I am still a little stuck with my shopping, your ideas are appreciated.

Merry Christmas and Happy New Year! Enjoy the holiday season! Rest, recharge, recreate, and see you next year!

]]>
Sat, 15 May 2010 17:06:56 -0600
Time is on my side? It can be if you plan ahead http://www.cypress.com/?rID=40570 Last week I was in Taiwan. Taking a trip from the US to the Far East always intrigues me, especially because of the way time gets messed about. This time I left late on Saturday (actually Sunday morning, but only barely) and arrived in time for an all-day meeting Monday morning. And at the end of the week, I left late (again) on Saturday and arrived back in Seattle in time for Saturday dinner, 5 hours before I left. And through the entire week I felt like a big clock was always hanging over my head - and now I have to readjust to Pacific Standard Time.

In embedded designs, you can similarly feel like you are under old man time's thumb (sorry for mixing Rolling Stones allusions) - first the project schedule is tight, and then there are likely time-related specifications to meet, like response time and refresh time (these are common touchscreen specifications; the difference between them is that response time refers to acting upon a finger and usually includes any low-power timing, while refresh time refers to how fast new data is available in active scanning mode). These two time issues can be related, in that ignoring the specifications can cause additional schedule pressure when, late in the design cycle, the system has to be "optimized" to try and meet the time specs. There is an additional "time" issue in embedded designs, and that is how to deal with the "Watchdog Timer".

So now let's see how all three of these are related and how to keep from feeling thrice beaten.

1) Consider time and timing at the outset. This means designing to meet timing specifications as well as planning/scheduling calendar time to improve or optimize the timing.

2) Keep track of the current time spec performance. 

3) Start the design with a watchdog rather than trying to force one in later. 

The best place to service a watchdog timer is the start of the main/foreground loop. The tricky part about finding exactly where to service it is that the point of the watchdog is to catch a catastrophic failure (the proverbial "lost in the weeds" situation), and therefore it must be serviced in a location that won't be executed once the CPU gets lost. In an interrupt-driven system, that means finding a non-interrupt-driven function in which to service the watchdog. So if the loop that executes periodically and produces the application's data is triggered by a timer interrupt, is this still the right place to service the watchdog? Yes, but a watchdog alone may not provide complete protection for your system - you may need to consider additional tests, such as a stack-level check, to indicate/ensure application health.

What should the timing of the watchdog be? It is funny, but in so many places in embedded design the number "3" shows up (like in debouncing a switch or action, majority voting, the number of times you can make the same mistake before getting fired), and again "3" is appropriate for watchdog timing: look at the standard loop timing when the watchdog will be serviced and set the timeout to about 3 times the expected timing. Why 3? Any system can have its timing intermittently perturbed, and that alone should not trigger the watchdog. This also helps to alleviate the main error in watchdog timing: servicing the watchdog in multiple places because "normal conditions" cause an occasional timeout. I have seen many examples where, under certain circumstances, a long interrupt routine causes a watchdog to trip during development.

Common solution: service the watchdog in the interrupt under these conditions. Bad solution. 

Better solution: fix the interrupt length and the watchdog timeout so that all normal conditions can be met without tripping the watchdog.
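
Here is a sketch of the better solution; WDT_SetTimeout() and WDT_Feed() are hypothetical stand-ins for whatever your part's watchdog actually provides, and the Do*() calls stand in for your application:

/* Service the watchdog ONCE, at the top of the main loop, with the
   timeout set to about 3x the expected loop period. All names are
   hypothetical stand-ins. */
#define LOOP_PERIOD_MS  10u
#define WDT_TIMEOUT_MS  (3u * LOOP_PERIOD_MS)   /* the rule of 3 */

extern void WDT_SetTimeout(unsigned int ms);
extern void WDT_Feed(void);
extern void CheckStackHighWater(void);  /* optional extra health test */
extern void DoInputs(void);
extern void DoControl(void);
extern void DoOutputs(void);

int main(void)
{
    WDT_SetTimeout(WDT_TIMEOUT_MS);

    for (;;)
    {
        WDT_Feed();  /* only here - never in an ISR, or a lost CPU
                        with live interrupts still looks healthy   */
        CheckStackHighWater();
        DoInputs();
        DoControl();
        DoOutputs();
    }
}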

So what about the schedule and refresh timing issues? First, build in benchmarking support for the timing specifications (typically this means bit twiddling) and regularly benchmark this timing throughout development. This will help keep the goals in mind and raise warning flags early on (the specs might be too tight or require more extraordinary measures to meet them). No one wants to be blindsided, but with ample warning, expectations can likely be managed (for instance, alpha might promise timing within 2x of the timing spec, beta at 1.5x, and by first official release timing is "met", either as originally specified or through spec-relief negotiations).
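
For instance, "bit twiddling" can be as simple as toggling a spare pin around the work being timed and watching that pin on a scope; PIN_Write() and ScanAllSensors() below are hypothetical placeholders for your own pin write and scan routine:

/* The pin's high time, measured on a scope or logic analyzer, is the
   refresh time. Both calls are hypothetical placeholders. */
extern void PIN_Write(int level);
extern void ScanAllSensors(void);

void BenchmarkedScan(void)
{
    PIN_Write(1);      /* rising edge: scan begins          */
    ScanAllSensors();  /* the work under measurement        */
    PIN_Write(0);      /* falling edge: new data available  */
}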

And remember: the better you manage "time" and meet expectations this time, the harder the specs will be next time :)

]]>
Sat, 15 May 2010 17:06:16 -0600
Hobgoblins and Small Minds http://www.cypress.com/?rID=42693 There is a famous misquote: "Consistency is the hobgoblin of small minds". This is a misstatement of Ralph Waldo Emerson's statement: "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines." I am not going to pretend I fully understand Emerson's point, but I will emphatically state that he DOES NOT mean that consistency is bad - "foolish" or inappropriate consistency is. Another writer, Oscar Wilde, had a different take: "Consistency is the last refuge of the unimaginative." When it comes to a design, "A foolish imagination is the last stop of a soon-to-be unemployed designer" (original quote of Jon D. Pearson).

Consistency versus imagination - the two are neither at odds nor opposite ends of the spectrum. Take for instance how a button works. If you were to look at a pushbutton on a device, could you tell its function by sight alone (all you can see is a button)? Probably not. Now, if I let you play with the button and view what happens, and then asked you what its function is, do you think you could tell me? Only if it behaves consistently. Now, what if it is a really low-quality button - the contacts are very bouncy, the button has a lot of play in it, and the device it is connected to is not debouncing the switch? As long as the function is consistent (when pressed the light turns on and when released it turns off, or pressing toggles the light on and pressing again toggles it off), you would have no problem "specifying" the behavior or using this switch (although you may have to practice patience). 
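
(Making even that bouncy button behave consistently is cheap; here is a sketch of a 3-sample majority vote, with ReadRawButton() and DelayMs() as hypothetical helpers:)

/* 2-of-3 majority vote: a bouncy, low-quality switch still yields
   one consistent answer. Helper names are hypothetical. */
extern int  ReadRawButton(void);
extern void DelayMs(unsigned int ms);

int DebouncedButton(void)
{
    int votes = 0;
    int i;

    for (i = 0; i < 3; i++)
    {
        votes += ReadRawButton() ? 1 : 0;
        DelayMs(2);          /* space the samples past the bounce */
    }
    return (votes >= 2);     /* majority wins                     */
}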

Now, imagine the switch is very high quality and the device is sensing it very precisely, but there are hundreds of combinations of current status (button pressure, time-pressed, orientation of button) and previous state (time in state, pressure/time/orientation when entered or exited). How long would it take to exhaust all possibilities and experience them frequently enough to "learn" all the behavioral nuances to be able to specify the switch "behavior"? Answer is somewhere between now and eternity (or until you give up and go do something else).

Now try one more thought experiment: imagine you are creating a new home entertainment device, and the only switch and behavior you could make use of was a momentary pushbutton. Why? Because that is what you used before and therefore will always use, exclusively. Could you design a product with this constraint? Sure. Would it be satisfying enough to sell well? Depends upon the device and use-case complexity, but likely this artificial constraint would hamper the design and therefore the customer satisfaction and in turn sales.

So which is the right way? My suggestion is to always consider the user - if a user is better served by an imaginative re-definition, then go for it. If users are very comfortable with doing things a certain way, take great care in changing this behavior (and even greater care if improper use could be dangerous). But in every case, provide consistency at least in the local sense of being understandable and repeatable. Humans are very good at learning and training themselves to adapt to a poor design that always acts the same way. They have much less patience for an elegant design they cannot remember or understand how to use.

 

BTW, in preparation for this article I learned that a hobgoblin has not always been a bad thing either, just like consistency. See http://en.wikipedia.org/wiki/Hobgoblin if you are interested.

 

]]>
Sat, 15 May 2010 17:05:54 -0600
Mom always said, don't play ball in the house (and other good coding guidelines) http://www.cypress.com/?rID=43232 Mother's Day has come and passed again, and yet again I have unfinished work to do. Since my mom does not get to Cypress blogs very often, I can tell you the details. I have a silver locket for her, and of course that alone would not be enough, since lockets are meant to hold pictures, and as it is my gift to my mother it would seem that I should include a picture of me. But there are two sides to the locket, and what goes in the other side? Another picture of me? Perhaps, but unexciting. (What's worse than a locket with my picture in it? A locket with 2 pictures of me.) Perhaps a picture of her son and her granddaughter. Not a bad idea, so now to get on it. Thus the unfinished business. (I did call her to tell her it was coming.)

But that got me to thinking about the role mothers have on our behavior. Mothers always have sayings, directions, commands that they use ad infinitum (although we kids can hear them 100s of times and still not do it). Things like: wait a half hour before swimming, rinse the dishes before putting them in the dishwasher, don't use my good scissors for that, no rough-housing in the living room (I can't imagine anyone but my mom saying "rough-housing").

Then there is the world-famous "don't play ball in the house". As a father, I find myself only sometimes telling these things to my daughter (but I do hear her mother say things like that), and I still find myself after all these years breaking these rules. For instance, I will kick a soccer ball around with my daughter in the family room where there are vases, flower arrangements, glass snowglobes, etc. well within striking distance. I usually only say the famous line after one of us has just swiped the ball past the vase for a near miss. Even then I may only say "keep the ball on the floor, no air".

So while I do not live by the letter of these "rules" drilled into me by my mother, they definitely affect my behavior and come to mind when something goes wrong. Much like a good set of coding guidelines. They encourage a safe and comfortable way of life, but we need to be reminded of them. When the list of rules and their particulars gets long and involved, the behavior-influencing becomes muted - how can 45 or more line items truly be internalized? Coding checklists very often are this way: long and detailed, and rarely supporting the flow of a review through the program. Coding checklists are meant to be used to check what has already been done, not to direct ongoing behavior. Coding guidelines, on the other hand, are meant to direct good behavior, and reinforce it, so that when your buddy reviews your code the checklist helps him pick up the things you (and he) missed. Checklists are important, but they do not teach good behavior; they catch bad behavior.

Unless the checklist is actionable and integrated into the everyday workflow. Before every flight, a pilot will pull out his checklist and go over all the same things he has hundreds of times before, knowing that doing so may save his life. If he didn't have the checklist but instead did his best and then asked someone else (manager or wife) to "check him out" for the flight, there is a good chance someone might miss something - unless he or she uses a checklist. Is it possible to get our coding checklists to this level, where they help us in the day-to-day workflow? Or to make our code review checklists flow as efficiently as a pilot's preflight checklist? If we treat them that way, yes. And if the checklist is not efficient and does not flow like a pilot's, then the coding guidelines must be much stronger and more behavior-influencing.

So next year when I am shopping for a great Mother's Day gift, a new checklist item for me before buying it is whether it is ready to ship or whether additional "assembly" will be required. And to my new guidelines for indoor soccer with Adrienna I add: don't play ball in the house when mom is home. And keep some good glue handy.

]]>
Wed, 12 May 2010 20:34:56 -0600
It's the "Use Case", dummy (Or, it ain't just what ya got, it's how ya use it) http://www.cypress.com/?rID=40307 After a (too) long break, I am back at my blog post. This one comes from the floor of the world's largest mobile (devices, software, infrastructure, tools) conference, the "Mobile World Congress" in Barcelona. First, in case you think this is a boondoggle: we had snowflakes when we arrived, rain yesterday, and 4 days of 10-hour booth duty. But yes, it is Barcelona, Spain.


There have been some big announcements, including Microsoft's Windows Phone 7 Series, an all-new smartphone OS (at least that is how it looks) - not just "shiny", but looking to improve the mobile experience: THE use case. Surrounded by all this gadgetry it is easy to get jaded and wonder "why or how would I use that". But that's exactly the point: the technology is to serve the user, while the reality is often the opposite. But sometimes you just need a guide to help you understand the benefit and application. Which is why we (Cypress, TrueTouch and Westbridge at least) have a booth at this show, demonstrating our 10-finger all-points multi-touch solutions and our new Hover and 1mm Stylus support.


Take 10-finger tracking, for instance. Why on Earth do you need to track ten fingers on a 3.2" screen? What possible use is that? (Why do you need a car that goes 100 MPH when the speed limit is 75?) The key to "using" 10-finger position tracking is to make the few fingers you really want to track robust. So while tracking one finger, if the user is gripping the phone tightly, the other points caused by the grip can be detected and ignored by the application. And of course there are other uses for more than 1 finger, such as gestures and (what else) games that haven't yet been written (games are great for pushing the envelope on technology, like the military did in days gone by).
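
One hedged sketch of how an application might do that grip filtering (the Touch type and EDGE_MARGIN value below are illustrative assumptions, not a TrueTouch API):

/* Keep only touches away from the bezel; points hugging the screen
   edge are assumed to be grip. All names are illustrative. */
#define EDGE_MARGIN 20  /* pixels from the edge treated as grip */

typedef struct { int x, y; } Touch;

int FilterGripTouches(const Touch *in, int n, Touch *out,
                      int width, int height)
{
    int kept = 0;
    int i;

    for (i = 0; i < n; i++)
    {
        int nearEdge = in[i].x < EDGE_MARGIN ||
                       in[i].y < EDGE_MARGIN ||
                       in[i].x > width  - EDGE_MARGIN ||
                       in[i].y > height - EDGE_MARGIN;
        if (!nearEdge)
            out[kept++] = in[i];  /* keep the intentional touches */
    }
    return kept;
}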


A shiny technology looking for a use case is fine, as long as it doesn't go looking too far for too long. The spec wars will continue no matter what, but when you are in the middle of them, it is key to reach out and embrace the use cases and make them essential to the users (or patiently wait for the users to tell you the cases where they want to apply the technology). In reality, customers/users will find new ways to use your technology, but you can help point them in the right direction. And if they think they thought of it, even better. And if I see any other cool things in Barcelona, I'll let you know.

]]>
Tue, 16 Feb 2010 11:52:29 -0600
Common feature of all products: "We have no common features" http://www.cypress.com/?rID=39761 Just wrapping up a whirlwind trip through Asia (Seoul, Shanghai, Tokyo): airport-hotel-office-hotel-office-airport was the pattern, only broken in Tokyo because the first "office" day was Friday. During this time I was talking to developers about the advantages of a common framework for their PSoC designs, where the tedious, low-value aspects of projects are put into the common framework, and each project's value-added features are plugged into or hung onto the framework. Everyone grasped immediately the value of this approach (time saving, continuous quality improvement) but almost universally agreed on one thing: we need to be able to change everything, including the common framework. Why? Every customer, every marketer wants everything their way.

Is this really true? Before I answer, I'll give you a clue. Using absolute words like "always", "every", and "never" almost always leads to a false statement; all you need is one exception. So I do not believe the demands of customers or marketers precludes developing and using a common framework for PSoC projects. What I do believe is that every developer prefers to do things their own way. They will endorse code reuse when the reused code is their own and resist reuse of another's code. And for good reason if code is not designed for reuse. But a common framework by definition is designed for reuse. Another thing every developer prefers: not spending time to debug his/her own stupid coding mistakes, which is the most important reason to use a common (existing) framework.

Frameworks come in many shapes and sizes: the C language is a framework, as is the Google Web Toolkit. As you can see, a framework can be extremely general or very special-purpose. All frameworks aim to reduce development time and increase the quality of your code by providing reusable, qualified software components. This is the very nature of PSoC development: User Modules and APIs, boot and configuration code generation. Every PSoC developer is making good use of this framework without knowing it. The next step is to create a more application-specific framework, where you define the "Frozen Spots" that you don't make changes to and the "Hot Spots" where you do (see http://en.wikipedia.org/wiki/Software_framework for a general article on software frameworks).

A framework is not meant to limit features, but to provide the structure upon which you add the features and build up your project. Will a customer know you are using a software framework? Possibly, since you may choose to standardize on some aspects of the interface (for instance, a communications register map). But a good framework needs to be designed to allow for exceptions, and therefore even the interface may be changed while the framework is employed. In object-oriented terminology, this is called overloading. We can borrow the term, but with the languages and compilers used with PSoC there is no built-in support for overloading. BTW, the best object-oriented project I ever worked on was 20 years ago and used PL/M as the implementation language, which simply proves that being object-oriented is a state of mind (see http://en.wikipedia.org/wiki/PL/M if you really want to know a little more about PL/M).
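
For a sketch of what that borrowed "overloading" can look like in plain C - a frozen framework routine calling through a function-pointer hot spot - consider the following (all names are illustrative, not from an actual framework):

/* The framework owns the frozen spot; each project plugs its own
   handler into the hot spot. Names are illustrative placeholders. */
typedef void (*CommandHandler)(unsigned char reg, unsigned char value);

static CommandHandler s_handler;  /* hot spot: project-specific */

void Framework_SetHandler(CommandHandler h)
{
    s_handler = h;
}

void Framework_OnCommand(unsigned char reg, unsigned char value)
{
    /* frozen spot: identical in every project */
    if (s_handler != 0)
        s_handler(reg, value);  /* the "overloaded" behavior */
}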

What's the next step? Consider your applications: is there a common structure that could lend itself to a framework? (Note: the answer is always yes; the question is to what degree.) I'll discuss how to build up a framework in a future post.

]]>
Sun, 13 Dec 2009 21:09:26 -0600
It's the first post so... http://www.cypress.com/?rID=39294 I guess introductions are in order. I am Jon Pearson, and my PSoC experience harkens all the way back to a device the majority of the thousands of PSoC users probably don't even know existed (CY8C26xxx). While it's hard to "see" a chip tape out from a marketing office, that's what happened two weeks after I began in September 2000. Two months earlier my wife's "patience" with my midlife full-time drop back into university life ended and I was at a job fair talking to a start-up that was going to create a user-definable microcontroller. I thought the idea was up my alley after 13 years slinging you-name-it code into yet another controller for God-knows-which client (contracting, a story for another time). I was looking for a marketing/definition role and PSoC was just coming together. Perfect timing. I drifted toward my strengths and over the rest of the "noughties" (no kidding, that's what we're supposed to call the years 2000-2009, I googled it) I became more and more involved in the design tools side of PSoC, letting other folks define the bits and bytes of the silicon. And as "touch" is now a huge part of Cypress and PSoC, I  weaseled my way into that field, and now I'm involved with touchscreens, working to shape how customers can use our tools better to make designs in PSoC faster and easier. 

The point of this blog, I hope, will be to share and compare design methods and fads, and keep abreast of how we can make things work best, keep marketing and engineering management both happy with better and faster stuff delivered on-time. Because methods can help keep the madness at bay. I will steal voraciously from anything good I see and post it (with attributions of course). I expect I can share a good story or two along the way to help drive the points home.

First one comes to mind upon reading a recent Jack Ganssle posting on Embedded.com called Software for dependable systems, a discussion of a new book of the same title. I like Jack's writing, because he's been around, seen a lot, and isn't easily swayed. This article reminded me of a call I had to go work for an implantable medical device company (about 15 years ago). They were interested in my avionics experience, especially the part that dealt with verification. Turns out the FDA had finally realized there was a computer program in the pacemakers and didn't know how to "qualify" it. The company wanted to generate a ton of verification paperwork fast (including products delivered many years back), and they knew that the FAA and DoD followed methods that were good at that. The more they explained the job the happier I was that I didn't get it.

This article from Jack appealed to me because I see this struggle all the time with PSoC customers with smaller projects. When more than 1 guy is involved and there isn't a plan to deal with this (or a method to follow), things start to spiral out of control as the schedule deadline looms. Coming from an avionics background, I know the "methods" applied (such as DO-178 and MIL-STD-1553), and when used well they can increase quality. But many times that increase in quality comes at such a high price that unless the government is footing the bill, it won't happen. Huge barrier to adoption.

Any method, no matter how good, is only useful when those applying it know what's in it for them, how it will help. And, in a nod to the "Agile" folks, Jack points out that the book authors agree those applying the methods actually need to be capable of doing so. The article even has a brief discussion of how C is dangerous and a suggestion for something called SPARK, an Ada derivative (more in the book I'm sure). Having used Ada and C/C++ and a couple other language varieties in between for embedded programming, I think Jack (channeling the book authors) makes good points. But for most of us, the list of programming language choices for our projects can be counted on our thumbs, and don't include anything very exotic.

Take a look, especially if your software "quality" experience is less formal, and you may get a few ideas.

See ya soon!

]]>
Fri, 11 Dec 2009 02:43:16 -0600
Get a (garage) job http://www.cypress.com/?rID=39519 Back and fresh from our (US) nationally sanctioned overeating holiday, digging through reams (or the electronic equivalent) of missed correspondences, it might not be a good time to suggest taking on another "job". Especially one that is likely NOT employer-funded and might not ever pan out. But that is exactly what I am suggesting, no, imploring you to do. If you don't have one already (and some folks already have several) - get a "Garage Job".

What's a Garage Job?

For the answer, check out this recent article with the subtitle: Engineer's stealth design leads to new business gambit. The formal title was less informative, other than pointing out the employer: Novel Sensor propels HP into sensor networks. An engineer at HP slipped one of his pet projects into a buddy's mask set containing "sanctioned" projects and, over the course of 6 years, discovered a new, better, lucrative accelerometer design, essentially by "running the design backwards".

HP is well known as a company that emerged from the garage of Bill Hewlett and Dave Packard, and that mindset continues to be encouraged. Other companies also encourage their employees to explore areas not covered by their current assignments; some even pay them for it. 3M's Post-it notes came from one of these (it was the combination of a failing project to find a waterproof adhesive and a scientist-engineer coloring outside the lines). 3M continues to officially express a policy of employees spending 15% of their time on outside projects. Google allows 20%, but something tells me that spending 20% of your 200% work week isn't too big of a concession on the company's part. 

More important than how it is supported are the benefits of a Garage Job, both to employee and employer. Since your manager may also read this, let's cover the employer's gains first - 1) possibly a new entry into a multi-billion dollar market (an outside chance), 2) a happier, more satisfied employee (because they are doing something they control and interests them), 3) lessons-learned that can be applied to future projects, and, the most intangible benefit, 4) a smarter, more self-motivated, independent thinker (also potentially more dangerous and less manageable, depending on management style imposed). All-in-all, a great deal for (almost) every manager and employer.

So what is in it for the employee? 

A Garage Job is a chance to color outside the lines, follow a path you might have discovered during a sanctioned project and not had time to explore. This can lead you to: 1) a great new product (although these tend to be evasive when chased, as stated in the Tao of Steve), 2) a better implementation or algorithm than typically employed (since there is time to explore many forks in the road when a deadline isn't looming), 3) a better understanding of why the current methods are reliable (again, it's the forks), and/or 4) general increase in knowledge. In short, this is classic engineering-problem-solving-muscle training - finding new problems to solve or new ways to solve the same old problems. 

Now that you know why to get a Garage Job, what about ideas on how to get started or what to do?

The key is to start with what you like (or love). In the case of using PSoC© as a part of your Garage Job, much of the "how" is easy and inexpensive - you have an enormous mixed-signal toolbox to work with. To get ideas for "what" look around you. Is there a product you own/use with an accessible design that you can replace part of with a PSoC-enhanced implementation? Maybe replacing a bunch of 20th century pushbuttons with 21st century CapSense buttons and/or proximity sensors. Can you augment something you find or use with a wireless interface? This might mean sensing an analog or digital signal, converting it to packets of data for CyFi™ and back to analog on the receiving end (as they say, all that's inside PSoC, even the user module to connect the CyFi radios). The result might end up in your garage or your kitchen, depending on how nice it looks when finished.

There are some days, like when you are slogging through the clean-up of a project, that a different, more creative outlet can put new life in those hours. Looking forward to your Garage Job definitely helps. Sharing the results (both positive and negative) is good for you and others, including your employer. And if you publish your results (choose the most lucrative outlet you can) everyone benefits including your professional standing.

]]>
Fri, 11 Dec 2009 02:42:53 -0600