Be who you are and say what you feel, because those who mind don't matter and those who matter don't mind. - Dr. Seuss - US author & illustrator (1904 - 1991)

Do you matter? Despite being a pedestrian principle, our patience with the differences we have with others is a good indicator.
August 2006 Archives
So a new website has launched that intends to use the "wisdom of crowds" to identify the best ideas for a new "dream" Mac application. The registered users get to vote on all the ideas to narrow down the field, then judges will decide on 3 winning ideas. The net result will be 3 new Mac applications that will prove the, ahem, wisdom (or lack thereof) of those participating. I think it will flop. I hope it's wildly successful.
Great ideas are NOT the limiting factor in software development! It's so much more about execution! Further, the driving forces behind great execution are much more multifaceted than a simple paper specification about how some application should behave. My favorite part of this process is:
Customer: Please build X.
Developer: Here is X!
Customer: I know what I want, and X isn't it.
It's everything that happens after this conversation that's important, not before. I like what Daniel Jalkut says:
This contest will do nothing except put a heavy burden on a small development team to turn somebody else's ideas into the type of application that can usually only be inspired by the developer's own dreams. Even when a team pursues a dream - their dream - success is far from assured.
[Michael] Bierut felt he started off being too "clever" - which I hear as new for new's sake - and that wasn't the right thing to do. I don't think innovative means shocking, obviously new, different, and all that. I think innovation can be invisible and brilliant and seamless to adapt to, with that whiff of exhaled "ooh!" that happens afterwards. To a graphic designer, the word may mean something else.
My work focuses on applying the teachings of two management science gurus: one is Eliyahu Goldratt and the other is W. Edwards Deming. Goldratt has something he calls the Theory of Constraints, which basically says that if you can identify the bottleneck in your process, you should focus all your management attention and all your investment dollars on alleviating that bottleneck in whatever fashion is appropriate. And he has some guidance on how to do that. Initially that started in manufacturing, and then he had a solution for project management, there was one for distribution channels and so on. I took all this theory and figured out how to apply it to software engineering. Some people said, "Well, that can't possibly work!" and, um, it does!
I think if you can reduce the team size down to something small, that makes a big difference. If you can put four guys in one room and they can talk to each other all the time in really high fidelity, that makes a big difference. But there's a certain scale of things where that doesn't work.

Speaking of domain specific languages as applied to the quality process:
It's a really cool use of domain specific languages in the Team Architect product and it's a fantastic way to actually have quality assurance in your quality assurance group!
Productivity goes up when you focus people on their production rate and don't force them into this conformance-to-plan concept. And if there's one thing you could change to make a big difference, it's to stop estimating and start measuring velocity.
If I walk through the building at 7:30 at night and the half a dozen people there all have the debugger open and they've been debugging for the last 8 hours, that's a fairly good indication to me that there's a problem.
Good management, good organizational engineering makes for happy, well-balanced people that have a life, and that's what I think we're all looking for, optimally.

He has a blog at Agile Management Blog and his book is named Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results. I just added it to my list of books to read. P.S. If you want to view this on the Mac, download the free Windows Media QuickTime plugin Flip4Mac.
But that does not mean we are unaware of the pain this will cause to a significant number of users. If you think we are not aware of that pain, consider this. David Weiss can give you a better number on this, but our testing methodology has always made extensive use of scripts for automated tests. Not just a couple of scripts, but thousands of scripts per Office application. At one point, all of those scripts were written in VB/VBA. In order to carry that testing effort forward into the era of Universal Binaries, every single one of those scripts had to be rewritten in AppleScript. I don't think it's even a remote exaggeration to say that our use of VB/VBA was at least a couple orders of magnitude greater than even our most automated customers. Do we know your pain? You bet we do.

This is absolutely true; we feel the pain. When we removed VB, we immediately lost more than half of our automated test bed. We've been carefully building back up our automation in ways that make sense, but it's a huge loss that those scripts no longer work. That said, we know that AppleScript is capable and something we can depend upon long term, not only for our testing efforts, but also for workflow customizations that our pro customers will use and build upon. Just last night I was at a WWDC party and was introduced to a developer who had written a very cool script in VB to automate some of the work he and his team do on a regular basis. He was obviously concerned about what losing VB would mean to him and his team with the next version of Mac Office. Everything he was doing with VB could be done in AppleScript, and when we explained how AppleScript Studio provides a real IDE and UI workshop for developing custom solutions, he was very excited. If you have custom VB scripts, may I suggest looking into AppleScript as an alternative solution? There are great resources on building AppleScript automated workflows, and it really is the Mac standard for inter-application communication.
We are always looking for ways to make our AppleScript support better, so if in trying to make your VB solution work in AppleScript, you run up against a wall, let me know either in comments or via email. I think you'll be surprised at how much can be done with AppleScript today.
Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. - Laurence J. Peter

It's often so easy to criticize without context.
Note: The following essay applies to software developed for an upgrades-based business model. While it may apply to other software business models, I make no attempt to defend that assertion. Also, see my disclaimer if you think this is more than just my personal opinion of the world.
In the beginning, you're a small team, maybe 10 developers and 10 testers. (Okay, this is a huge team, but I'm trying to make the numbers easy, work with me!) You ship your version 1.0 and it's a huge success and you make loads of money! You also get lots of feedback on what could be better. So, you go to work on version 2.0. Part of what you need to make version 2.0 great is more developers. You hire one or two more developers and one or two more testers, but there is a problem: the testers must test everything in version 1.0 plus all the new stuff scheduled for 2.0. You've hired exceptional testers, and they dig in, and with long hours they are able to test sufficiently, barely, and you ship 2.0. Everyone attends the big ship party! Whoo!

After a while, you look at your finances and realize that while 2.0 was much better than 1.0, there's still a lot that could be better, and your customers make that very clear. So the market for your product is not saturated, which means there's still lots of upside. On top of that, your sales team informs you that by just adding feature X along with feature Y you can expand your potential market by at least double! Enthused by the success of your product, you move on to version 3.0, and it's about at this point that you begin to sense some nervousness from your test team. Testers always seem to be a hyper-critical bunch, it is their job you know, so you brush off that antsy feeling, excited by your increasingly successful product. By about version 12.0 you realize what the testers were all nervous about:
Note: The numbers are fake, but the problem is real.
Let's say 1 developer produces 1,000 lines of code each product cycle. Can you see a pattern here?
The number of testers you have working on the product must increase with your code base, or you're doomed to shipping a product of lower quality, eventually. Also, in case you're wondering, increasing your testers in proportion to your code base has some pretty negative financial implications, as does pushing out your ship date to make room for more testing.
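The arithmetic behind that pattern can be sketched in a few lines. This is purely illustrative (the numbers are fake, as noted above, and the team sizes and hiring rate are my own assumptions): the code base accumulates every cycle while the test team grows only linearly, so lines-per-tester climbs version after version.

```python
# Illustrative only: made-up numbers, but the trend is the point.
# Each cycle every developer adds lines; old code never goes away,
# while the test team grows only linearly.

LINES_PER_DEV_PER_CYCLE = 1_000

def lines_per_tester(versions, start_devs=10, start_testers=10, hires_per_cycle=2):
    devs, testers, code_base = start_devs, start_testers, 0
    results = []
    for version in range(1, versions + 1):
        code_base += devs * LINES_PER_DEV_PER_CYCLE  # all prior code still needs testing
        results.append((version, code_base, testers, code_base // testers))
        devs += hires_per_cycle      # hire a couple of developers each cycle...
        testers += hires_per_cycle   # ...and a couple of testers
    return results

for version, code, testers, ratio in lines_per_tester(12):
    print(f"v{version}: {code:>7} lines, {testers} testers, {ratio} lines/tester")
```

Under these assumptions each tester is responsible for roughly eight times as much code by version 12.0 as in version 1.0, which is exactly the nervousness the testers were signaling.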
If that were not enough, there's some pretty solid evidence that even the original 10 testers were insufficient for the initial 1.0 product code base, let alone the scaled and additional load that has grown over time! Exhaustive, comprehensive testing is simply impossible. So while in version 1.0 the testers had to make intelligent priority judgments about what to test, in version 12.0 the testers have to basically divine the future if they have any hope of getting to the critical bugs manually!
It really is this hard.
The professional testers I know have an enormous challenge at hand and must be so absolutely decisive about where they spend their time, it amazes me. Sometimes new testers, or those unfamiliar with software development, will simply think, "Hey, all I have to do is find all the bugs!" and they'd be wrong. What they have to do is find every important bug and verify that every critical code path is working. (And loads of other stuff, but that's for another essay...) You see, at the core of professional software testing, there is a built-in, super advanced internal priority system. Great testers seem to have an efficacy gene that allows them to explore the areas that are most important and identify the worst bugs. This is a skill and an art, and the world could use more great software testers.
The Automation Pill
Given all the foregoing, it is absolutely incredible to me that once a tester is given the chance to write code to automate some of their work, somehow, for some reason, this priority system goes into stealth mode. I don't know why this is. Perhaps the part of the brain you use to write code messes with the part that compares the relative costs of automation. Perhaps it's the dream: "If only I could automate all the 1.0 feature testing, at least in 2.0 I could focus just on the new, fun stuff." It could be, and sometimes is, a manager who has long lost touch with what core testing is all about and, now looking for ways to "drive efficiency and reduce costs," asks for something as silly as 100% automation. It could be a million things, but this is for sure: test automation is a fantastic tool that I believe can help with the problem mentioned above, but like most tools, when it's misused, it hurts.
When to Automate?
So when do you automate your tests? I don't know, for sure, but I do have some questions you might consider when making the decision. All of these questions have a common theme, and it is this: What is my return on investment for automating this test?
When you invest time to write an automated test, you implicitly lose time you could be using to find and file bugs. That lost time will only pay off if your test continues to be valid for a certain number of test runs. The best way I've heard this described is "script death."
When you write the script for the first time, you give it life. It lives as long as you don't need to modify the script in any way and the results of the test continue to be valid. If the script has a bug in it, or the product under test changes, or a new OS version changes an assumption, or a new CPU comes to town and causes your test to become invalid, your script has died and you need to re-examine whether you are going to invest the time to fix it or not.
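The return-on-investment question behind script death can be reduced to a back-of-envelope check. This is a minimal sketch with made-up costs (the hour figures and scenarios are my own assumptions, not from the essay): automating wins only if the script survives enough runs to beat doing the test by hand.

```python
# A back-of-envelope ROI check for automating a test, using made-up costs.
# The script "earns" its keep only while it stays alive (valid and unmodified).

def automation_pays_off(write_hours, manual_hours_per_run, runs_before_death,
                        maintenance_hours=0.0):
    """True if automating beats running the test by hand, under these assumptions."""
    automated_cost = write_hours + maintenance_hours
    manual_cost = manual_hours_per_run * runs_before_death
    return automated_cost < manual_cost

# A file-save regression check run against every daily build for months:
print(automation_pays_off(write_hours=8, manual_hours_per_run=0.25,
                          runs_before_death=120))  # True: 8 hours vs 30 hours manual

# The same script written against a feature still in churn, dead after five runs:
print(automation_pays_off(write_hours=8, manual_hours_per_run=0.25,
                          runs_before_death=5))    # False: 8 hours vs 1.25 hours manual
```

The real costs are fuzzier than this, of course, but the shape of the decision is the same: every factor below (run count, product churn, verification difficulty) moves one of these numbers.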
Update: I had forgotten where I had read this concept, but it was years ago. Thanks to a link, I found Bruce McLeod's weblog and a link to Brian Marick's 1998 article on this very subject, When Should a Test Be Automated?. It's a great read; I highly recommend it. This was one of the first articles I read on test automation back when we were just starting our Mac automation system.
Since the payback in test automation comes only as the script is run, ask: will it be run many times? Automating to ensure no regressions in a critical area, like testing that a security hole is plugged, or in a core area, like file open and file save, makes a great candidate for automation, because the cost of a regression in these areas is very high and the tests will be run with each new daily build.
Some tests might not be core or critical, but the testing involved is mind numbing and easy to mess up. A good example of this might be, say, opening 1,000 user documents and making sure your app doesn't crash. :-) Tedious testing makes unhappy testers, and a happy tester is a productive tester.
For the automation to be worthwhile, it must verify something! I've seen far too many glorified crash tests marked as automation. If you don't have verification in your automation code, then the only time it's going to fail is when you encounter a crash; a good thing to catch, to be sure, but far short of the script's testing potential. When writing your script, you'll need to consider what methods you'll use to get data back from the system for verification. Find or make APIs that you can use (AppleScript can be very useful for this, hint, hint). Screen shot verification is fraught with difficulty. Avoid it if you can.
One thing that will cause script death faster than just about anything else is the product changing. This is why scripting to the API (you do have an API, right?) is so much better than scripting the UI. Typically, the UI changes much more frequently than the API ever will. Either way, consider whether the feature is new and undergoing lots of change. If so, avoid automating your tests around the feature until it has settled down. (This can mean waiting toward the end of the product cycle, which is when you are busiest looking for those show-stopper bugs.) Just know that if your test script doesn't get much value this product cycle, it will in the next, provided the feature doesn't change.
Automate around things that can be verified from a dynamic oracle. All verification needs some kind of code that says, "I'm expecting X; did I get it?" If you are encoding the definition of "success" in your test script, how sure are you that what is "correct" will not change? If you are not very sure, move to another area for automated testing.
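A small sketch of the difference, using a hypothetical "insert today's date" feature (the feature and function names are mine, for illustration): a hardcoded expectation dies on its own, while a dynamic oracle computes the expected value independently at run time.

```python
import datetime

# Two ways to verify a hypothetical "insert today's date" feature.

def verify_hardcoded(inserted_text):
    # Brittle: the "correct" answer is frozen into the script, so this
    # script dies the day after it's written.
    return inserted_text == "2006-08-15"

def verify_with_oracle(inserted_text):
    # Dynamic oracle: the expected value is computed independently each run,
    # so the script stays valid as long as the feature's contract holds.
    expected = datetime.date.today().isoformat()
    return inserted_text == expected

# Simulate the feature's output and check it against the oracle:
feature_output = datetime.date.today().isoformat()
print(verify_with_oracle(feature_output))  # True on any day the contract holds
```

The oracle version trades a little extra code for a much longer script life, which is the whole ROI argument in miniature.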
If it's easy to automate something and the probability is low that things will change, go for it. A good example of this is automating your setup and install testing. These kinds of tests are going to be done over and over again, and most installers have some kind of scriptability built in.
As you near the end of your project cycle, the chances that your automation will earn back its investment in the current project cycle diminish. At the beginning, things are too turbulent. The best time to write automation is about the middle of the cycle, when things are mostly stable but there are still lots of builds left to test.
Some scripts are easy to write and the verification easy to set up for your English builds, but once you localize your project, "script death" becomes rampant. Keep this in mind. If you are writing a script, how hard will it be to localize the script when the time comes? Can you write it "OneWorld" from the beginning? If it is almost certain your script will die on the localized builds, don't plan on using automation to augment your localization testing without significant work on your test scripts.
Investigation is by far the most time-intensive part of test automation. Write your scripts so they are atomic, or very specific in what they test. Don't write test scripts that run for 30 minutes, unless that's the explicit purpose of the script. You don't want to be running a script for 30 minutes just to repro a failure that occurs in the 29th minute of the test execution. Write your scripts so they are super easy to read and so that the logs "yell" what the test is and how it is failing. Your automation harness will play an integral role in how easy it is for you to investigate your automation failures.
Small, atomic scripts run great in parallel!
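As a quick illustration of that parallelism point (the check functions here are trivial stand-ins for real test scripts, not anything from an actual harness): three atomic checks finish in roughly the time of the slowest one, and a failure names exactly the check that died instead of burying it 29 minutes into a monolith.

```python
import concurrent.futures
import time

# Three atomic checks; each sleeps to simulate real test work.
def check_open():
    time.sleep(0.1)
    return "open: ok"

def check_save():
    time.sleep(0.1)
    return "save: ok"

def check_print():
    time.sleep(0.1)
    return "print: ok"

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    # Run all three checks concurrently and collect results in order.
    results = list(pool.map(lambda check: check(), [check_open, check_save, check_print]))
elapsed = time.perf_counter() - start

print(results)
print(f"wall time: {elapsed:.2f}s (vs ~0.3s if run serially)")
```

Each check is independently rerunnable, which is what makes both parallel execution and fast failure investigation possible.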
There is often a hope that automation can somehow magically babysit a feature just like a human tester running through a test plan. This is simply not true. A human can see so much more of what is going on and pattern match a thousand different things simultaneously. An atomic automated script has its blinders on and is fully focused on verifying only what you specified when you wrote it. Don't underestimate how stupid automated tests can be.
In closing, automated testing is not a silver bullet that is going to solve all the problems of testing and software development. It is a valuable tool that you'd be silly not to employ in managing the complexity of software testing. I believe James Bach said it best:
"I love test automation, but I rarely approach it by looking at manual tests and asking myself "how can I make the computer do that?" Instead, I ask myself how I can use tools to augment and improve the human testing activity. I also consider what things the computers can do without humans around, but again, that is not automating good manual tests, it is creating something new."
I've got some:
I'd not skip the keynote. ;-) And get there early. Ever since I scored a new mouse for showing up, I'm expecting some kind of keynote giveaway. Don't do that. It's perpetual disappointment.
I enjoy the Apple Design awards, but I'm less inclined toward the Stump the Expert session.
Take notes about the questions and answers, they are often the most interesting part of the presentations, but not included in the recordings online.
There is a WebObjects talk given by a guy (sorry, I don't know his name) who always creates a song to explain what he's teaching at the end of his session. Sometimes two songs! They are just awesomely funny. I always try to make it to this session. And once again, it's not on the DVDs/online version you can view afterward.
The wireless network will be sketchy and most probably will not work during the keynote. Just know that in advance. Seems like it gets better as the week goes on.
Walk the floor, both in the hallways and in the presentation rooms, to scout out where the power outlets are. Often there are outlets in the floor; if you bring a small power strip, you become fast friends with everyone around you.
Trying to schedule your sessions in advance is simply a lost cause. The keynote changes everything. You'll have lunch and time to think on the first day to plan the rest of the week. Don't be afraid to hop sessions, if one session is lame, jump to another one.
The feedback sessions are always fun, but by far the most vitriolic is the Aqua feedback session. I go just to enjoy the banter. :-)
Food goes fast. Get into the rhythm of when the food is set out and try to be there for the first drop. I almost always got the tail end of the Jamba Juice distribution, which always meant I got nothing.
Take some time to walk in the Labs and talk to the engineers. They are friendly and always have interesting things to say.
Oh, ya, there are lots of parties. Enjoy 'em.
The plug-fest is fun, just to see the new hardware devices people bring in to try out.
The Apple campus bash is fun, especially if you've never been there before. Unless you are on the first bus there, don't expect to get into the Apple Store on the Apple campus without a long wait in line.
What are your tips for attending WWDC?
Total global demand for software will grow by an order of magnitude over the next decade, driven by new forces in the global economy like the growing role of software in social infrastructure, by new application types like business integration and medical informatics, and by new platform technologies like web services, mobile devices and smart appliances. Without comparable increases in productivity, total software development capacity seems destined to fall far short of total demand by the end of the decade. What will change to provide the massive increase in capacity required to meet demand? It is not likely to come from adding developers. Instead, software development methods and practices will have to change dramatically to make developers much more productive.

Read the whole article here.
Martin Fowler has a whole essay on this topic: Language Workbenches: The Killer-App for Domain Specific Languages?
Most new ideas in software development are really new variations on old ideas. This article describes one of these, the growing idea of a class of tools that I call Language Workbenches - examples of which include Intentional Software, JetBrains's Meta Programming System, and Microsoft's Software Factories. These tools take an old style of development - which I call language oriented programming - and use IDE tooling in a bid to make language oriented programming a viable approach. Although I'm not enough of a prognosticator to say whether they will succeed in their ambition, I do think that these tools are some of the most interesting things on the horizon of software development. Interesting enough to write this essay to try to explain, at least in outline, how they work and the main issues around their future usefulness.

And to round out the discussion, Neil Davidson responds to Steve Cook with this insightful comment:
As much as I find the technical side interesting, the thing which really fascinates me is how the way people write software will change in the future. ... I've been thinking about it a bit, and I think that although the analogy with the changes in industrial manufacturing (from craftsman to mass production to mass customization) is interesting, I'm not sure it really holds true.

Does Apple see any of this? Do they think it all too distant to consider currently? Either way, what an interesting time to be developing software!
I think one of the key things you mentioned was the analogy to the supply chain, and how this chain will lengthen. The way I see it, software will always need a craftsman at one end. Software is intrinsically hard to do, and requires people, or teams of people, to think very carefully and deeply about what they're doing. The tools, processes and components they use will have to change though - it is at this point in the supply chain that I can see mass production happening. I think the analogy is that you're always going to need craftsmen like carpenters and bricklayers to build a house, but the tools, techniques and materials they use will be mass produced. At the moment we're at the stage where the bricklayer makes his own bricks, and the carpenter cuts his own trees down. I don't think the craftsman will be replaced, but the tools he uses will be provided by companies who provide (or are) software factories. That's the point I was trying to make about a few companies dominating the market - somebody will discover that they can produce an e-commerce software factory and sell hundreds of thousands of the things at $1,000 a piece rather than two or three a year at $100,000 a go (because it's no longer a problem constrained by people's time). Presumably Microsoft believes it will be them and that's possibly part of the reason why they're entering the CRM, accounting and business markets.
German designers Oliver Keller and Tillman Schlootz presented their extremely extreme personal tank concept for the 2006 Michelin Design Challenge, showcasing vehicles made especially for California's diverse and often rugged topography. Hyanide's tread contains 77 (holla!) identical plastic-covered segments of Kevlar rubber held together by Kevlar rope. Each segment flexes independently, allowing fluid multi-directional movement suitable for any conditions including deep mud, sand and snow.

No more bent axles or broken crankshafts for me!
I remember the first time I saw this Sony commercial, I was blown away. Check out the high quality QuickTime movie here. Simply amazing. This is the kind of creative advertising that inspires, entertains and sells. Well, it looks like they are at it again!
What happens when you strap 70,000 liters of paint to the sides of some old buildings and explode the colored paint everywhere? I guess we'll find out.