January 6, 2009 - 16:24, by Dostalek, Kevin
While the trends of last year in the Corporate Portal and Enterprise Content Management areas continue, many are predicting that Business Intelligence will be 2009's hot business system push. This makes sense, especially in these tough economic times, as companies are finding it even more important to make good decisions (because you may not get as many chances). Certainly my company is seeing an upswing in BI-related projects, and with many of the new toolsets available today, these projects are finally within the financial reach of small and medium-sized businesses.
However, the main problem we often run into is that a BI solution is only as good as the data behind it, and many of these mid-sized companies just don't have the data infrastructure to produce what they need without first spending a lot of money on it. In these companies much of the business insight is still locked inside their employees' heads, or at least requires their expert interpretation before reasonable decisions can be made. That's well and good for the data analysts, but in this web 2.0/enterprise 2.0 world we live in, we want to engage ALL possible intellectual capital to solve our problems.
I just want to remind everyone that there is indeed a way of tapping into this immense goldmine of crowd wisdom: an Enterprise Prediction Market. These are by no means new, but as with the aforementioned technologies, it's only in the last couple of years that they have become inexpensive enough for small and medium-sized businesses. They are a great way of producing real crowd-sourced insight for decision making. They also have a great side effect of engaging your employees (or extended enterprise, or customers) and giving them a voice (and maybe even having a little fun!).
The way they work is simple. You present a "market" or "stock" (or in some cases a "bet") and let your user group openly trade (buy/sell) these based on their expert knowledge and strategic intuition (more on SI in a future post). The market prices rise and fall based on the laws of supply and demand, which allows you to develop an insight picture based on a diverse group of opinions. Many studies have shown that crowds almost always produce more accurate predictions than individuals or small groups. The chart below shows just how close to perfect one particular prediction market did when looking across the board at all the "markets" (questions) it had posed.
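For the curious, the pricing mechanics behind an automated "Market Maker" (so traders never need to find a counterparty) can be sketched in a few lines. The sketch below uses the well-known logarithmic market scoring rule in Python; the class and parameter names are my own illustration, not any vendor's API:

```python
import math

class LMSRMarket:
    """Toy logarithmic market scoring rule (LMSR) market maker for a
    yes/no question. Illustrative only - real prediction-market
    platforms layer accounts, fees, and UI on top of something like this."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity          # higher b = prices move more slowly
        self.shares = {"yes": 0.0, "no": 0.0}

    def price(self, outcome):
        """Current price of one share, interpretable as the crowd's probability."""
        exp = {k: math.exp(q / self.b) for k, q in self.shares.items()}
        return exp[outcome] / sum(exp.values())

    def _cost(self):
        return self.b * math.log(sum(math.exp(q / self.b)
                                     for q in self.shares.values()))

    def buy(self, outcome, quantity):
        """Buy shares; returns what the trade costs, and moves the price up."""
        before = self._cost()
        self.shares[outcome] += quantity
        return self._cost() - before

market = LMSRMarket()
print(market.price("yes"))   # 0.5 - no information yet
market.buy("yes", 50)        # traders who expect "yes" buy in
print(market.price("yes"))   # price (crowd probability) rises above 0.5
```

The point of the market maker is exactly what it does for Inkling-style sites: there is always a quoted price, so a participant can trade the moment they have an opinion.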
The main reason I'm so high on prediction markets right now is I see them providing the following benefits:
They leverage what you already have (intellectual capital) - no data scrubbing required :)
They are an entrée into more sophisticated employee engagement systems such as idea markets. These can be difficult to implement if your culture is not geared up for them, but prediction markets give the employee base and the decision makers a taste of the participation culture, while still maintaining some control.
I believe gaming and game theory are the next HUGE wave in business, and this type of system certainly has that flavor as well.
So where to start? Well, if you want a jumping-off point, there is a great post (and in fact an entire blog) that I recommend: A (long) review of prediction market software. If you just want to jump in and see one of these in action, I highly recommend taking a look at Inkling. I really like Inkling because they have done a great job of making the whole "trading" thing much simpler (by using plain English instead of stock market terms, using a Market Maker so you don't have to worry about volume, and having a very slick, streamlined interface). You can participate in a market almost immediately from their site for free (or create your own pilot market, free for 45 days).
December 10, 2008 - 11:45, by Dostalek, Kevin
I just wanted to dump some ideas out there surrounding one of the trickiest pieces of using agile development methodologies in a consulting/outsourced environment: Agile Contracts. Much of the musings below come from other sources with my own thoughts mixed in, but one source I'd like to specifically call out is the PDC 2008 session I attended, given by Mary Poppendieck and Grigori Melnik.
The Problem with Two Party Interactions
So the first thing to look at is why we even need contracts in the first place. The conventional wisdom is as follows:
- Companies inevitably look out for their own interests
- Contracts are needed to limit opportunistic behavior
What Mary points out, though, is that at the core the problem is that conflicts of interest potentially exist, and these drive the paranoia about opportunistic behavior. In an ideal setting, though, we:
- Assume the other party will act in good faith (so this requires a level of trust)
- Let the relationship limit opportunism (again, requires trust, but also some basis for the relationship)
- Use contracts instead to do these things:
- Align the best interests of each party with the best interests of the joint venture
- Eliminate conflicts of interest
I'll come back around to how this type of relationship and contract are formed in a moment, but first let's look at how the two types of traditional contracts fall short of meeting the ideals above.
Problems with Fixed Price Contracts
- Supplier is at greatest risk
- Customer has little incentive to accept the work as complete
- Generally does not give the lowest cost
- Competent suppliers will include cost of risk in the bid
- Creates the game of low bid with expensive change orders (which blows a hole in the primary reason CFOs like fixed bids, which is budget predictability)
- Generally does not give the lowest risk
- Selection favors the most optimistic (desperate) supplier
- Least likely to understand project's complexity
- Most likely to need financial rescue
- Most likely to abandon the contract
- Customers are least likely to get what they really want.
Remember that the "protection" a fixed-bid contract seemingly provides (if we don't like it, we don't have to pay for it) is illusory. The value the project is projected to provide must be greater than the price of the effort; otherwise the project would not move forward. If the effort is not accepted, the vendor is out the costs of the resources employed on the project. The customer, however, is out both the costs of their own resources involved in the project and the anticipated return of the project (which, as we just said, is greater than even the PRICE, much less the vendor's actual COST). So who is the biggest loser here? Depending on the relative sizes of the vendor and customer it may still "hurt" the vendor more, but clearly the customer has more at stake, and so I would contend that fixed-bid "protection" is a fabrication.
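To make the asymmetry concrete, here's the arithmetic with some made-up numbers (every figure below is hypothetical; the only relationship the argument requires is value > price > cost):

```python
# Hypothetical figures for a rejected fixed-bid project. The only
# constraint from the argument above is:
#   expected_value > price > vendor_cost
price           = 500_000   # fixed bid the customer agreed to pay
vendor_cost     = 400_000   # vendor's actual cost of delivery
expected_value  = 800_000   # return the customer projected (> price)
customer_effort = 100_000   # customer's own staff time on the project

# The customer rejects the work and pays nothing:
vendor_loss   = vendor_cost                       # out the delivery cost
customer_loss = customer_effort + expected_value  # effort + forgone return

print(f"vendor loses   {vendor_loss:,}")    # 400,000
print(f"customer loses {customer_loss:,}")  # 900,000
```

Even with the "protection" clause exercised, the customer here is out more than twice what the vendor is, which is the whole point.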
Problems with Time and Materials Contracts
While there are many useful scenarios where T&M projects make sense (staff augmentation, etc.), in general there are quite a few problems with them in an outsourced project model as well:
- Customer is at greatest risk
- Supplier has little incentive to complete the work
- Therefore we believe we need to control supplier opportunism
- ENTER: Project Control Processes
- Detailed oversight generally provided by the least knowledgeable party
- Supplier must justify every action
- LIKELY LEADS TO:
- Increased costs
- Artifact creation that does not add direct business value
- An assumption that the original plan is the optimal plan (the one created at a time of lowest knowledge/information about the project)
Candidate Solution: Target Cost Contracts
Circling back to our idealistic world now that we know some of the problems with traditional contracts, let's look at a different kind of contract. How can we build a contract that has the following properties?
- Target Cost defined and includes all changes
- Target is the joint responsibility of both parties
- Target cost is clearly communicated to workers
- Negotiations occur if the target cost is exceeded (or is projected to be)
- Neither party should benefit under this scenario (it's a failure scenario)
- Primary goal of contract is to remove conflict of interest.
In order for such a contract to work there are a few assumptions that probably need to pre-exist:
- We have some basis for relationship and trust.
- This means we may have to start off with a small project using a traditional contract.
- We are probably using an agile development methodology that utilizes fixed-time, fixed-budget, and prioritized variable-scope mechanisms (backlogs, etc.)
The structure of this contract includes the following:
- An umbrella or framework contract with the legal stuff in it
- Establishment of a target cost
- Work themes defined in stages (prioritized)
- Stages should be small to limit risk for both parties and to provide everyone with frequent points to revisit the value-proposition of the relationship
- Scope beyond the current stage remains fluid and negotiable
- Contract should describe the relationship, not the deliverables
- Contract should set up a framework for future agreements
- Contract should clearly define a means for mediation if no agreement can be reached (this is important!)
So what I've found in my almost 15 years in this industry is that our best customers always seem to end up in this type of contract model anyway (after perhaps a few projects using a traditional contract). But wouldn't it be better if we could actually LEAD into a relationship with this idea in mind and use it as a means to better define our value proposition and distinguish ourselves from competitors? (Or at the very minimum, get to this model sooner rather than later so that everyone can be more productive.)
Those are my thoughts, please share yours!
November 24, 2008 - 21:13, by Dostalek, Kevin
From Typemock and Roy Osherove earlier today:
Typemock are offering their new product for unit testing SharePoint, called Isolator For SharePoint, for a special introductory price. It is the only tool that allows you to unit test SharePoint without a SharePoint server. To learn more click here.
The first 50 bloggers who blog this text in their blog and tell us about it will get a full Isolator license, free. For rules and info click here.
November 17, 2008 - 21:05, by Dostalek, Kevin
First, let me say that I'm speaking anecdotally here. I'm not a doctor, nor do I have any medical training. But recent experience has shown me that most medical practice is, by and large, an agile process.
Right now, both of my children are sick with some sort of virus. At least now they are calling it viral, even though they started them both on antibiotics this past weekend for other reasons (possible strep and swollen glands). But that's probably for the best, since my daughter has now developed bronchitis on top of that, and so the antibiotics should help. Personally, I can't imagine how I'd be faring after days of 102+ fevers. While my daughter is showing signs of improvement, my son is on the downward slope, trailing his sister, so the "prediction" at this point is "he has whatever she had, so we can probably expect about the same course." It certainly seems like there's a lot of guessing going on here.
My point in all this is not to complain about the level of medical services being provided, but rather to illustrate that in most cases, the medical methodology is simply to look at the symptoms, make an initial diagnosis based on available information (which is never complete), TRY/DO SOMETHING, then collect feedback in the changing environment, revise your plan, and repeat.
Does that sound familiar? Kind of sounds like agile development to me. Or let's put it another way. Can you imagine demanding a fixed-bid quote from your doctor before treatment? Maybe you could give them a written RFP of what you think your symptoms are, or if they are lucky you might even let them do a quick examination (but only if they insist, and only if they will do it for free).
Would you really want to farm out your health to the lowest bidder?
Think of the change-order bureaucracy that would have to ensue as you lay dying on an OR table (no, wait a minute, anesthesia is not in this contract, and it's not actually required)... you can survive fine without your pinky finger, so don't try to tell me that's a critical defect!
No, all of that is silly. I'm not saying there isn't real negligence in the medical profession, as evidenced by malpractice lawsuits, but implicit with any procedure is a "barring any complications, this will cost about $xxxx" type of "contract". And remember: medicine is probably more of a science than writing software! How or why do we put up with this? Because the one absolute is that we TRUST our doctors.
We trust them to have our best interests at heart. We trust them to take whatever action will get us a little closer to better ("done") given the current circumstances. We trust them not to take risks with our health without first talking it over with us, making sure we understand, and ultimately in the end letting US choose what procedures will be implemented.
Why can't software development be more like this? That is, built on trust, working in partnership, having common goals, and always dealing with current realities to do what's best NOW. We can. Mostly, what gets in the way are contracts that are written to allow either the customer to take advantage (fixed bid) or the vendor to take advantage (time and materials). I'll write more about the alternative contract types later this week, but right now I want to focus on the vision.
We can make things better. All we need are a few brave people- vendors, customers, and leaders to insist that we do.
October 1, 2008 - 03:15, by Dostalek, Kevin
This year has seen a huge advancement in the technology available to development teams for the automation of unit tests. This has been primarily driven by an upswing in organizational adoption of Agile development methodologies, which by and large demand the use of automated test suites to ensure high levels of quality in the face of change. Test Driven Development (TDD) has taken this to the next level, as it actually requires that the tests be written prior to the code. This article explores many of the traditional challenges surrounding automated unit testing and highlights some of the new tools available to overcome them.
What is an Automated Unit Test?
An automated unit test is simply a piece of code that exercises another piece of code in an attempt to assert that it is working as expected. The benefits of testing in this manner include:
• Provides a regression test library, so that upon any change to the code base (and certainly upon the check-in of new code) you can prove the changes have not “broken” any previously expected behavior.
• Allows for much quicker integration testing, because the build server (continuously integrating) can run the test suites automatically.
• Provides a documentation source that answers the elusive question: “what was the developer's intent?”
• In the case of TDD, provides a structured way to attack complex problems in smaller (presumably easier) increments.
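To make this concrete, here's what a small automated unit test looks like. I've sketched it in Python's built-in unittest module for brevity; the shape is identical in NUnit or MSTest, and apply_discount is just a made-up stand-in for your own code under test:

```python
import unittest

def apply_discount(price, percent):
    """Code under test (hypothetical): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    # Each test asserts one expected behavior, so a failure pinpoints
    # exactly which assumption a later code change broke.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)   # run the suite and report results
```

Once checked in, a suite of tests like this is exactly the regression library described above: the build server reruns it on every change.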
What do I test?
What to test is a highly controversial topic and there are many lines of delineation that can be drawn (only public methods, complete code coverage, etc.). The direction I generally give my developers is to write tests for anything that has a reasonable likelihood of breaking if we change something. By running code coverage reports in Visual Studio 2008 you can get a feel for which areas of the code are being exercised and write additional tests if you find potential trouble spots that aren’t covered. Testing private methods has traditionally been difficult without resorting to “tricks” that are often more complicated than the code under test. However, newer IDEs such as Visual Studio 2008 allow you to simply pick those methods from a wizard, and it produces a “Private Accessor” class automatically that encapsulates the reflection magic, effectively abstracting away much of the complexity.
How do I write variant tests?
Writing tests to test all possible runtime conditions is very time consuming and sometimes not practical. However, many testing frameworks including nUnit, MSTest (Visual Studio), and MBUnit, support the writing of data driven tests in which you can provide a list of variables from a data source and have the test run multiple times with different sets of variables. A new tool from Microsoft Research called Pex takes this one step further by providing automated exploratory testing. The way this works is the developer writes a parameterized unit test and then Pex performs static analysis and runtime monitoring on both the parameterized unit test as well as the code being exercised to generate a suite of standard unit tests with very high code coverage.
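A minimal illustration of the data-driven idea (sketched in Python rather than MSTest, with a made-up classify_bmi function standing in for real business logic): one test body, many data rows.

```python
import unittest

def classify_bmi(bmi):
    """Code under test (hypothetical): bucket a BMI value."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    return "overweight"

class ClassifyBmiTests(unittest.TestCase):
    # One parameterized test body; the data rows drive repeated runs,
    # just as a data source drives rows in MSTest or NUnit.
    CASES = [
        (17.0, "underweight"),
        (18.5, "normal"),
        (24.9, "normal"),
        (25.0, "overweight"),
    ]

    def test_classification_table(self):
        for bmi, expected in self.CASES:
            with self.subTest(bmi=bmi):
                self.assertEqual(classify_bmi(bmi), expected)

if __name__ == "__main__":
    unittest.main(exit=False)   # run the suite and report results
```

Tools like Pex then go one step further: instead of you hand-picking the rows, the tool explores the code to generate them.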
What about dependencies?
Often we run into testability issues because our code relies on dependencies and it becomes tricky to isolate the code under test. The recent increase in adoption of architectural patterns such as MVC (see ASP.Net MVC, MonoRail, ProMesh, etc.) and MVP (see WCSF) helps minimize this problem through separation of concerns and the use of dependency injection containers (see Unity, ObjectBuilder, Castle Windsor, Spring.Net, etc.). Since they all allow dependencies to be injected at runtime, you can inject the real dependencies in production and mock objects when running tests. Speaking of mock objects, while I still like writing my own, the mock object frameworks that greatly simplify this task have come a long way as well. My two favorites (which take very different approaches) are TypeMock and Moq. NMock and Rhino Mocks are also great choices that have quite a following.
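Here's the dependency-injection-plus-test-double pattern in miniature, with a hand-rolled fake sketched in Python (all the class names are invented for illustration); frameworks like TypeMock or Moq generate this kind of double for you:

```python
class SmtpMailer:
    """Real dependency: would talk to a mail server in production."""
    def send(self, to, body):
        raise NotImplementedError("needs a live SMTP server")

class FakeMailer:
    """Hand-rolled test double: records calls instead of sending."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class OrderService:
    # The mailer is injected through the constructor, so a test can
    # substitute a fake without touching production code.
    def __init__(self, mailer):
        self.mailer = mailer
    def place_order(self, customer_email, item):
        # ...persist the order, charge the card, etc....
        self.mailer.send(customer_email, f"Order confirmed: {item}")

# In production: OrderService(SmtpMailer()). In a test, inject the
# fake and assert on the interaction instead of hitting a server:
mailer = FakeMailer()
OrderService(mailer).place_order("a@example.com", "widget")
assert mailer.sent == [("a@example.com", "Order confirmed: widget")]
```

A DI container does the same wiring declaratively: it decides at runtime which implementation gets passed into that constructor.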
How do I test in a real web context?
Here as well there are some great new tools that can be used to write true web-context (not mock) driven tests. The first is Visual Studio Team System Test Edition, which supports writing web tests directly in the Visual Studio IDE. The ability to leverage these same tests for load testing is an added benefit, as is easy integration with TeamBuild if you are using that for continuous integration. An even better tool I’ve found is Selenium, which lets you compose your tests visually via a Firefox add-on (asserts and all) and then produces code in your target language (e.g. C#) that you can paste directly into your test framework such as nUnit or MSTest.
What about business driven acceptance tests?
Usually your user acceptance tests will be in the form of system testing and probably not great candidates for automated testing. However, there are many situations where the business drives some complex logic or calculations that really should be tested within a unit test. Since the previously mentioned unit test frameworks involve coding, we can’t exactly expect business users to author these tests. However, frameworks like Fit have evolved into viable tools that allow customers and developers to learn what their software should do and what it does do, by providing interfaces that let customers feed parameters and expected outputs directly into developer-provided unit tests and see the test results against their expectations immediately.
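The Fit idea in miniature: the business owns a table of inputs and expected outputs, and a thin fixture runs each row through the real code and reports pass/fail per row. Sketched here in Python with a made-up shipping_cost rule; real Fit reads the table from an HTML or wiki page rather than a source file:

```python
def shipping_cost(weight_kg, express):
    """Developer-provided logic under acceptance test (hypothetical)."""
    base = 5.0 + 2.0 * weight_kg
    return round(base * 1.5, 2) if express else round(base, 2)

# The business supplies rows of inputs and expected outputs,
# typically straight from a spreadsheet:
TABLE = [
    # weight_kg, express, expected_cost
    (1.0, False, 7.00),
    (1.0, True, 10.50),
    (10.0, False, 25.00),
]

def run_fixture(table):
    """Run each business-supplied row through the real code."""
    results = []
    for weight, express, expected in table:
        actual = shipping_cost(weight, express)
        verdict = "pass" if actual == expected else "fail"
        results.append((weight, express, expected, actual, verdict))
    return results

for row in run_fixture(TABLE):
    print(row)
```

The key property is that the business can add or change rows without writing code, while the developers own only the small fixture glue.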
There are many tools available today that take much of the tedium out of creating automated unit test suites. With many barriers removed, there is really no reason to not increase your software development quality and efficiency by leveraging the practice of automated unit testing. An excellent resource for those trying to create an organizational standard can be found at SSW.