2014-10-27

By Vincent Delaroche

App Dev Productivity and Quality: The IT Exec’s Guide to Software Analysis and Measurement

Creating, enhancing, and managing custom applications is a major business process within the enterprise, and IT executives are starting to introduce more robust metrics to manage it in a more professional way. Whether you want to measure the productivity of your development teams, assess the structural quality of your business-critical systems, or prevent software risk from leading to major outages or data corruption, a software analysis and measurement system is becoming de rigueur.

In essence, measurement provides you with analytics and visibility to support better judgment and decision making (see "The economic impact of ADM quality and productivity measurement"). Whether it is about your teams, your suppliers, or your software assets, shooting in the dark is no longer an option, and hitting the wrong person, or the wrong app, is even worse. Software analysis and measurement platforms analyze application source code and software architecture to determine technical risk, complexity, resiliency, changeability, and technical debt. These platforms drive improvements on two fronts: team productivity, and software quality and risk.

On productivity, some executives have jumped at the always-appealing simplicity of counting lines of code, but that simply does not work: highly productive teams, through smart reuse and elegant engineering, produce fewer lines of code, while bad developers can be very good at producing reams of linear code to appease the gods of line counting. The good news is that a new Object Management Group (OMG) standard for automated, computable Function Points has been a real game changer, boosting the adoption of function points as a productivity measure.

On software quality and risk, tool vendors and open source communities provide viable solutions for checking code quality, such as syntax, readability, and adherence to good practices. The rub is that the exact same piece of code can be safe or highly dangerous depending on the context in which it operates. Checking the quality of the code within a file, without any architectural context, will never tell you whether a critical system is secure, rock solid, efficient, or easy to change. "System-level" analysis capabilities allow a deep understanding of what the code does when it interacts with other components and technology layers. Since this is an area that is still new to most technology executives, I thought it valuable to provide some hints as to what they should be looking for. Below are the key requirements for building or buying an enterprise-grade software measurement engine, followed by some advice on implementing it successfully in large IT organizations.

A software analysis platform must:

- Provide meaningful, business-relevant analytics, as opposed to scientific hieroglyphs.
- Be accurate, so it does not lead you to blame the good performers and reward the bottom of the bell curve.
- Be precise, especially when interpreting trends.
- Have high-resolution capabilities, to find the little glitch that could damage you.
- Be scalable across your portfolio, measuring 80% of your ADM spending.
- Be credible, to avoid the initial pushback against measurement.
- Be owned by you, because you can outsource everything but your judgment.

Meaningful Analytics: Software analytics must be business relevant. A cyclomatic complexity of 3.74 combined with Halstead metrics may be intellectually stimulating, but it is quite difficult for IT executives to act upon.
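For context, cyclomatic complexity is simply a count of the independent paths through a routine: the number of decision points plus one. Here is a minimal sketch in Python, using the standard-library ast module, of how a code-level analyzer might compute it per function; it is an illustration of the metric only, not of how any commercial engine works:

```python
import ast

# Node types that open a decision branch; each adds one independent
# path through the function (McCabe's V(G) = decisions + 1).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict:
    """Return a {function name: complexity} map for the given source."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            decisions = sum(isinstance(n, DECISION_NODES)
                            for n in ast.walk(node))
            scores[node.name] = decisions + 1
    return scores

sample = """
def settle(order, retries):
    for attempt in range(retries):
        if order.is_valid() and order.funded:
            return order.settle()
    return None
"""
print(cyclomatic_complexity(sample))  # {'settle': 4}
```

The number 4 is perfectly meaningful to the engineer who owns that function; the point above is that it only becomes actionable for an executive once thousands of such raw scores are rolled up into business-relevant indicators.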
Software analytics that speak for themselves (Resilience, Security, Efficiency, and Maintainability) have recently been standardized by the Consortium for IT Software Quality (CISQ), an organization founded by the Software Engineering Institute at Carnegie Mellon University (http://www.sei.cmu.edu) and the OMG (http://www.omg.org). Any measurement system intending to deliver actionable metrics should adopt them.

Additionally, the ability to benchmark metrics against the industry is important. It is more meaningful for an executive to hear that the Robustness rating of his business-critical app has dipped to a score of 2.0 while his industry peers are benchmarked at 3.2 than to look at a score without any valid reference or context. If you don't know where you are, a map is useless.

The ability to enhance productivity analytics with quality metrics is also extremely valuable. For instance, being able to see that your Bangalore team is producing 1,000 more function points a month than a same-size team in Vietnam, at roughly the same level of structural quality, provides a balanced scorecard for continuously improving IT performance.

Accuracy: The accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's actual value. I have heard senior IT executives repeatedly state that their code quality initiatives have not produced the expected results: "Code quality dashboards are all green and great, but my systems are still poor and getting worse..." That is because the aggregation of code quality scores at the unit-program level does not reflect the true health of the application. The hard-to-hear truth is that developers may all produce high-quality code locally while inadvertently concocting systems that are doomed to fail for structural reasons. Dr. Richard Soley, the CEO of the OMG and a well-known pioneer in business software architecture, wrote: "Basic Unit Level errors account for 92% of the total errors in the source code. These code level issues eventually count for only 10% of the defects in production. On the other hand, bad software engineering practices at System Level account for only 8% of total defects, but consume over half the effort spent on fixing problems, and eventually lead to 90% of the serious reliability, security, and efficiency issues in production" ("How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems").

Accurate assessment therefore requires system-level analysis, a natural complement to code quality analysis yet an entirely different analytical process. This is a basic distinction known to software engineers, and it is even elaborated in the Wikipedia entry for "Software Quality" with numerous academic references.

Precision: The precision of a measurement system is the degree to which repeated measurements under unchanged conditions show the same results. Precision over time is absolutely crucial in software. There may not seem to be much difference between a score of 3.5 and a score of 3.4 out of 4, but a trend of a 0.1-point decline per quarter in the structural quality of a system managed by a multi-million dollar development team is actually quite significant, especially if the SLAs with ADM suppliers are based on these scores.
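To make that concrete, here is a minimal sketch, using a hypothetical series of quarterly scores, of how a sustained decline can be surfaced with an ordinary least-squares slope even though any two consecutive readings look close enough:

```python
from statistics import mean

def quarterly_slope(scores: list[float]) -> float:
    """Least-squares slope of score against quarter index."""
    xs = range(len(scores))
    x_bar, y_bar = mean(xs), mean(scores)
    cov = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores))
    var = sum((x - x_bar) ** 2 for x in xs)
    return cov / var

# Hypothetical structural-quality scores (0-4 scale) over six quarters.
history = [3.5, 3.4, 3.3, 3.3, 3.2, 3.0]
slope = quarterly_slope(history)
print(f"{slope:+.2f} points per quarter")  # roughly -0.09 per quarter
if slope <= -0.05:  # arbitrary materiality threshold, for illustration
    print("Flag for SLA review: sustained structural-quality decline")
```

A tenth of a point is noise in any single reading; a consistent slope across quarters is a contract-level signal, which is why the engine's measurements must be repeatable.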
Precision also depends on the degree to which the engine understands the code it analyzes, and on whether the assessment covers the interfaces between the various technology layers within the system.

High-Resolution: Analytical "resolution" can be defined as the smallest change in the underlying analysis that will produce a response in the measurement. In software, the notion of resolution is fundamental, because a market-facing app that handles ultra-sensitive data may score well on "security" yet still have, buried deep in its substrata, a couple of little glitches that can lead to dramatic consequences and cost you your job. The measurement model must be sensitive enough to react to the introduction of such critical violations.

In software engineering, a measurement engine can be considered valid only if it is accurate, precise, and performs high-resolution analysis. It is relatively easy to get a "rough" measure of a system through basic syntactic analysis, and such an approach may actually be acceptable at the portfolio level, where a macro-analysis can give an "idea" of the situation. However, when you have to make a judgment call on team performance, or about protecting the business against software failure, it is black and white: you are either right or wrong, and you either find the glitch or you do not. There is no room for approximation.

Scalability: Scaling a software measurement practice across a large IT organization requires analyzing all the common technology silos (J2EE, .NET, Web technologies, COBOL, and so on), as well as TP monitors, database structures and languages, and package customization languages such as ABAP. Code analyzers are a good starting point for analyzing one file at a time. The story becomes more complex when tying together multiple versions of software frameworks. For example, an ordinary Java/Oracle application may actually involve JDK 1.4 and 5.0, JSP, Struts, STXX, EJB, JDO, WSDL, Spring, Hibernate, and Oracle 8, 9.x, 10g, and 10g R2, all of which may run in parallel within one organization and require subtle analysis adjustments to keep the measures accurate. Further, new versions are constantly deployed, so ongoing maintenance of the analyzers is a must to avoid obsolescence. And keeping the code analyzers current is only a third of the battle: the architecture analysis of transactions across the system must be maintained as well. All these languages use different syntaxes, grammars, and technical concepts, and their corresponding analyzers require internal modifications to connect the dots and track system-level transactions. The evolution of each language analyzer you rely on must stay in sync with your architecture analysis capabilities (see the sketch below for a rough illustration of this coordination problem).

Another obvious aspect of scalability is the ability to analyze the monster applications that run the business. The analysis must be able to handle a 4-5 million line of code application without blowing up.

Finally, a large part of the TCO of a measurement system stems from the work to implement and run it. From the collection of the code and data structures to the analytics in your hands, the platform should offer a mechanism to collect and manage all the "materials" to be analyzed, and to automate the integrity pre-checks, the analysis itself, and the quality control of the results. A lack of automation will lead to recurring maintenance and operations costs wasted on excessive manual tasks.
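As a rough illustration of that coordination problem, here is a minimal sketch, with hypothetical analyzer names and a hypothetical version registry, of how a measurement pipeline might route each source artifact to the analyzer variant matching its technology and framework version; real platforms resolve this far more elaborately:

```python
from pathlib import Path

# Hypothetical registry mapping (technology, framework version) to an
# analyzer release. Keeping these entries current with each new
# framework version is a large share of a platform's maintenance cost.
ANALYZER_REGISTRY = {
    ("java", "jdk1.4"): "java-analyzer-legacy",
    ("java", "jdk5"):   "java-analyzer-5.x",
    ("cobol", "*"):     "cobol-analyzer",
    ("abap", "*"):      "abap-analyzer",
}

EXTENSION_TO_TECH = {".java": "java", ".cbl": "cobol", ".abap": "abap"}

def pick_analyzer(path: Path, framework_version: str) -> str:
    """Resolve the analyzer for one source file, or fail loudly."""
    tech = EXTENSION_TO_TECH.get(path.suffix)
    if tech is None:
        raise ValueError(f"No analyzer registered for {path.suffix}")
    analyzer = (ANALYZER_REGISTRY.get((tech, framework_version))
                or ANALYZER_REGISTRY.get((tech, "*")))
    if analyzer is None:
        # Integrity pre-check: refuse to score code we cannot parse
        # correctly rather than produce a silently inaccurate measure.
        raise ValueError(f"No analyzer for {(tech, framework_version)}")
    return analyzer

print(pick_analyzer(Path("Billing.java"), "jdk5"))           # java-analyzer-5.x
print(pick_analyzer(Path("LEDGER.cbl"), "enterprise-cobol")) # cobol-analyzer
```

The refusal path matters as much as the happy path: an integrity pre-check that declines to score what it cannot parse correctly is what keeps the resulting measures accurate rather than merely available.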
Credibility: Introducing measurement is not an easy task in any organization; headwinds and pushback will arise. They could come from an app owner who does not like the idea of putting his "creation" under the microscope, from a supplier that may contest the scores, or from anyone who would prefer to stay in his or her comfort zone. To ease acceptance, the analytics must carry industry-based credibility, and it is useful to rely on indisputable institutions. The most advanced body of software quality standards is CISQ (http://it-cisq.org/), founded by the SEI and the OMG. The OMG is well known for having standardized the broadly accepted CORBA and UML, and CISQ is today supported by many large corporations, global IT services providers, and numerous experts in software process improvement. CISQ is driving standardization efforts and has published software engineering rules and a measurement model that work at both the code and system levels. Promoting a "CISQ-compliant measurement system" will raise the credibility of any measurement initiative.

Deployment and Operations: The last, but not least, vital aspect of executing a measurement program is whether it is correctly deployed and run. For the app analysis itself, the best practice is to create a centralized analysis center managed by technically competent engineers. It is also recommended to assign dedicated resources close to the project teams to educate and promote the analytics produced, turning them into business value.

Once the measurement service is set up, you must decide who should own and promote it. Tech leads will often favor the bottom-up approach; management consultants and Lean gurus will assure you it cannot be done other than top-down. Common sense and field experience tell me it should be owned by management or by a neutral third party, as is often seen in other engineering disciplines such as aerospace, healthcare, and construction with the notion of IV&V. It is also a matter of opinion and management style; in this humble CEO's opinion, self-assessment by the worker is just one more utopian idea developed by Karl Marx in "The Communist Manifesto." If you worry about creating conflict within your organization, a good approach is to equip the developers with the best code analysis tools working at the source-file level (there are some decent, inexpensive open source tools for that), while making them aware that the whole app will be analyzed and measured right after the build phase, providing them good feedback on structural and architectural quality. Then, reward the high performers. In our experience, it is always the most respected teams who have the highest quality scores once measurement is introduced. Ideally, a "VP of ADM Analytics" should be appointed to lead the charge, and positive internal communication highlighting the apps that score high in structural quality will create momentum. And always keep in mind that the net value delivered by measurement is directly proportional to the amount of pushback it provokes.

Finally, if you do not want to run your own software analysis and measurement engine and create this new function, you can go to the cloud. "Software Analytics as a Service," delivered in managed-services mode, is being offered by a growing number of global system integrators and consulting firms. This approach provides the required enforcement of security and compliance controls, with faster time to value.

That is it. During a recent CBS interview that struck me, Charlie Rose asked Bill Gates what the one thing was that he had learned across his entire entrepreneurial and philanthropic life.
After a brief pause, Gates said: "Measurement... I believe it is crucial to be able to measure how we are doing things. The places we've done well are where we've really been able to see what's going on." (CBS This Morning, "Bill Gates on Technology and Philanthropy," Feb 18, 2013.)

Vincent Delaroche is the founder of CAST.
