Wednesday, February 11, 2015


The Pursuit of Enterprise Agility


How do you achieve true agility across an enterprise?

"It takes a village to raise a child." Achieving true enterprise agility is surely not an easy path. It involves changes to existing structures, questioning traditional mind-sets, and envisioning a new way of delivering work. Planning and executing a process change for a single team with a handful of individuals and limited variables and risks is far less challenging when compared to coaching an enterprise with a few hundred or more individuals on a paradigm shift in their thought process and behavior. One can find countless case studies, articles, and journals on the Internet to support this claim. If an organization has an appetite for embracing agility and commits to the necessary steps required to establish a transformation mechanism to achieve the same, why does scaling an Agile behavior across the enterprise seem to become an insurmountable goal? More important, why does an organization so often lose focus from the goal of continuous improvement and quickly slip into an autopilot mode of rinsing and repeating the same tried and tested traditional mind-set that failed them in the first place?

More than a decade of coaching and training large organizations on all things Agile has led me to believe that achieving enterprise agility begins with asking the right questions. Such an approach may start with initial assessments to understand an organization's readiness to adopt Agile as well as the current state of its methods. From a strategic standpoint, this means interviewing key individuals and groups at various levels of the company on questions such as: What is the business driver or need behind the organization's interest in Agile? What are its biggest current challenges (i.e., what problem is it trying to solve)? At a tactical level, the assessment asks: How are the teams structured? What is the scope of the transformation (i.e., which teams may be transitioning to an Agile approach)? The output of such an assessment is usually a 360-degree view that can help gauge whether an organization is truly ready for a paradigm shift in its strategy and tactics. In my experience, this coaching technique has proven effective many times in leading organizations toward self-discovery and helping them make conscious decisions to establish a realizable vision for transformation. Further, sustaining enterprise agility requires creating a rational strategy to realize that vision, regularly monitoring and evaluating the investments made in it, and executing the strategy by efficiently managing project work and truly embracing the Scrum values (focus, courage, openness, commitment, and respect).

A recipe for success

Organizations often seek out cookie-cutter strategies for process transformations and underestimate the role of parameters such as culture and mind-set, the nature of the work, and factory models (e.g., matrix versus project-centric or functional silos) when defining a vision for change. To sustain an enterprise-wide Agile transformation, it is critical that the company's leadership be educated on what they are getting into and what to expect along the way.

Over the years, I have been asked many times whether there is a blueprint to structure an approach for scaling Agile adoption across an enterprise. Given below are a few Agile enablers that can be leveraged to coach an organization and manage expectations about the transformation journey.

Foundational training across the organization: A positive effort to create an improved business requires an organization to invest in its most valuable resource -- people. Effective foundational Agile training can develop people's potential and in turn generate improved behaviors and techniques to deliver a high-quality product. Hence, an organization's commitment to up-front training is extremely valuable and can prove to be an effective Agile enabler to promote a successful adoption. Also, top management should be trained effectively to back the Agile adoption and be able to remove blocks and foster creativity throughout the organization. It is critical that they understand their role and demonstrate the core Agile values in their behaviors.

Dedicated hands-on coaching: At a tactical level, engaging an Agile coach for hands-on coaching during project execution can prove highly beneficial during the initial few sprints/iterations. A team new to the process might struggle to get the maximum value out of certain Agile mechanics (e.g., sprint/iteration planning, retrospectives, release planning, story mapping), so a seasoned coach can guide the team toward using them efficiently. A coach can also help improve a team's self-discipline. For instance, the Agile Manifesto value "Working software over comprehensive documentation" implies that, ideally, a team should deliver working software or a completed product increment every sprint; establishing a clear Definition of Done for the team's sprint/iteration goal can reinforce this value. A coach can assist the team in carving out such a definition, considering the nature of the team's work and its velocity.

From a strategic standpoint, a coach can assist middle management and senior leadership in building the line of sight for future work required to ensure a continued flow of value to customers. Additionally, coaching both core and extended team members on the benefits of structuring communication vehicles like Scrum of Scrums and Agile open forums (e.g., Hothouses, Open Spaces, Lean Coffees) can help an organization periodically gauge its Agile transformation maturity as well as effectively manage its leadership's expectations about the overall enterprise-wide changes.

Also, a team's engineering practices can set a benchmark for its success. For instance, system testing, integration testing, and defect resolution often bleed across sprints/iterations, thereby preventing teams from consistently delivering working software at the end of each sprint/iteration. Hence, coaching the team on certain technical practices (e.g., test-driven development, continuous integration, automated build deployments, and automated testing), as well as guiding them on how to craft user stories that are small enough to complete and deliver in a sprint/iteration or less yet large enough to independently deliver business value (think INVEST criteria for user stories), can prove critical to completing a sprint/iteration successfully. In this way, a coach's expertise can be pivotal to enabling Agile and ensuring sustainability.

Enabling and managing Agile projects through governance: Another key enabler of Agile adoption can be an enterprise-wide governance framework geared to manage Agile projects and drive Agile adoption on new initiatives by carefully assessing their characteristics (e.g., size, risk, complexity, business value) and providing guidance on whether a project is a good fit for Agile. Such a mechanism encourages new projects to present themselves in ways that everyone (especially senior management) can understand. In essence, standardizing the dimensions on which new projects are assessed establishes a robust mechanism to effectively tie an organization's Agile transformation vision to its project management strategy.

So, is there a blueprint?

Every organization requires a different flavor of counseling when it comes to steering them toward a new way of doing work. Agile tactics focus on delivering incrementally, communicating daily, and increasing collaboration and feedback to help drive results and value to the customer. However, achieving enterprise agility requires an adaptive leadership style that consciously embraces ambiguity, takes risks that disrupt the status quo, institutes new management styles, and expedites decision making across the organization.

In today's global business market, an organization should be able to adapt effectively and efficiently to unexpected changes in order to gain an edge over the competition. Hence, agility should be prioritized as an objective by all levels of an organization and fueled by the requisite processes, policies, and knowledge management techniques.

In a nutshell, there isn't a firm blueprint for achieving true enterprise agility. However, a strategy built on a logical thought process and executed with rational planning can certainly pave the way for success. Agile is a team process, and achieving true agility requires a team effort, no matter how large or small the scale.
Source: https://www.scrumalliance.org/community/articles/2014/november/the-pursuit-of-enterprise-agility

Enterprise Architecture: Increasing Business Architecture ROI

In kicking off the new year, we are taking a fresh look at the way CIOs and Enterprise Architects (EAs) find value and ROI for their organizations.
As we continue our enterprise architecture journey with new agile solutions, we are focusing on 2014 best practices, frameworks, and methodologies. In our next several ZapFlash notes, we will offer insights into the architecture domains with a focus on business and IT value, trending SOA-based solutions, emerging technologies including cybersecurity trends, key Federal IT guidelines, and agile architecture case studies. ZapThink's focus remains on agile architecture, with SOA-based solutions as a strong foundation.
We are focusing on current challenges and opportunities for architects, CIOs, and IT leaders. Our coverage will consist of the architecture domains with a focus on the business value, new SOA-based solutions, emerging technologies, and developing IT guidelines.
We will look at the business architect (BA) role first. BAs have moved from a limited role in the past, through a rapid resurrection after the Great Recession, to today's "must have" function that offers greater value to your organization. The use of SOA has helped develop and elevate the BA's impact and importance in organizations by providing reusable business web services on a solid architecture foundation, rather than "just a bunch of web services" without business context and business value. ZapThink explores 2014 trends and offers CIO and EA teams insights and recommendations on why you should rethink your business architecture function's value proposition and ROI to your organization. We believe investments in the business architecture component will be critical in 2014 for business innovation, whether corporate executives are driving profits and bottom-line earnings for growth or maximizing ROI from austere government budgets.
Understanding the Enterprise for the Business Architecture’s Value
Most organizations use an enterprise architecture framework that consists of the four domains of the architecture – application, business, information, and technology.
Figure 1 – Typical EA Example
However, we find most enterprise architects will tailor the enterprise framework to best fit their organization, communications, and culture. For example, the National Institutes of Health (NIH) uses only three domains: business, information, and technology, as illustrated below.
We find the key for most EAs is to use a framework that can be articulated and communicated within the organizational culture and that ties business operations and mission to stakeholders and customers. We selected this example because it clearly focuses on the business architecture, with the information and technical architectures as the IT enablers. In the early pioneering enterprise architecture work at the National Institute of Standards and Technology, the business architecture had a clear focus on a business model for the federal government, which is a similar construct for industry.
Today, enterprise architecture has been extended and tailored by both industry and federal organizations. The business architecture, however, has evolved with more rigor and better alignment for value and ROI.
Over the last five years, we have found an EA shift toward the critical role of the BA and its maturation in newer frameworks, models, and standards/guidelines for best practices. Today, we see both CIOs and EAs trending toward allocating more resources to the BA function for innovation and agile delivery solutions.
Figure 2 – NIH’s Enterprise Architecture Framework
In most mature organizations, BAs use a disciplined approach with agile, scalable, and usable models to drive and realize business goals for the enterprise. This is where most of today's heavy lifting is being reshaped for tomorrow's agile enterprise. BAs need to focus on creating value and driving value realization as the outcome of the organization's annual work plan. The value model below illustrates the need to validate each effort for business value in order to achieve value realization and direct benefits for customers. The model is comprehensive, fits the BA role, and is a well-accepted concept, covering the value planning, value creation, and value realization process illustrated below.
Simply put, BAs must align and drive the business strategy from the C-suite for realization of the expected business goals and mission outcomes.
Not all organizations are firing on all cylinders with 21st-century business architecture, because they are still focusing too much on technology, such as infrastructure operations. Most likely, the CIO and IT leadership bear responsibility for this misalignment.
We see many CIOs and EAs shifting their focus to business application overhaul, with enterprise portfolio management, data management, innovation, and agile delivery of new strategic products and services dictated by the business owners. To be fair, a number of EAs and IT leaders remain entrenched in allocating too much time and attention to other areas, such as lower-level operations and short-term firefighting in today's leaned-out IT departments.
Increasing Value Proposition of Business Architects
In order for BAs to add value to the organization, CIOs must be in the right leadership roles. Today's CIO discussions with CEOs often lead to the question of what type of EA and IT leadership team is right for the organization. The 2014 CEO focus is on growing revenue and the firm's earnings each year, and the BA role is instrumental to that goal.
Today, your BA must be leveraged to improve corporate strategy alongside the IT leadership team. This encompasses support for key new development such as application overhaul, organizational capability and design, enterprise planning, performance analysis with key metrics, and operating models for innovation and refresh in change management. CIOs are now leveraging BAs to help streamline enterprise portfolio management, business case analysis, and continuous business process management for better organizational alignment.
Figure 3 – Mark Von Rosing Value Model
This alignment defines the right type of BA, one who will support the business ecosystem in executing the C-suite's corporate strategies for adding value. We see BA skills and open source frameworks and tools continuing to strengthen the business architecture for the enterprise in 2014.
The overall business architecture discipline continues to mature, especially through the Business Architecture Guild (the Guild), which promotes best and emerging practices and expands the architect knowledge base with tools such as the Business Architecture Body of Knowledge (BIZBOK™). BIZBOK Version 3.0 created much interest, and the expanded Version 3.5 was released in January 2014. Overall, we believe it is a well-thought-out and comprehensive guide that BAs can easily adopt. The BIZBOK allows BAs to align with key stakeholders and maintain a line of sight to the strategic plan of the business architecture function. OMG established the Business Architecture Special Interest Group (BASIG) to create a forum and to bridge the work of the Guild, collaborating to mature the BIZBOK for a wider and more accepting audience within the standards group.
To sum up, CEOs are still struggling to embrace a strong role for the CIO and the EA team in supporting enterprise execution plans. Given the high-value business targets and the stronger ROI realization expected from the CIO, the BA team is now in its best position for 2014.
Marching to 2014 Business Architect Demand and Beyond
CEOs have used up most of their revenue and earnings tools on aggressive cost-cutting programs and stock buy-backs to recover from the Great Recession. Looking at early 2014 corporate results and a mostly flat corporate earnings outlook, CEOs will refocus on innovation, with technology as a driver to boost earnings through refreshed products and services. Hence, they will need to bring back the EA teams, with best-in-class BAs who were most likely cut loose during the recovery, to fill the current business transformation gaps. We predict these steps will lead to more capex investment as the economy recovers, requiring BA talent with agile enterprise architecture skills in both industry and the federal government. We expect CIOs to move forward with BA funding and new hires, at higher salaries for agile enterprise skills, after being frozen in place for the past five years.
ZapThink Take
Planning forward in 2014, corporate and government spending will need to increase for both EA and BA efforts. Doing so will mitigate the effects of the aggressive cost cutting and ultimately shortsighted budget savings realized during the Great Recession.
Today's businesses and governments have a huge appetite for corporate IT talent that can plan and execute complex stakeholder and C-suite requirements, bridging a broad emerging-technology moat for tomorrow's solutions. The smart money is on staffing up your EA team now, with a focus on BAs with agile enterprise architecture skills.
The resurgence of EA ROI and value proposition will add bench strength for innovation and agile execution of refreshed products and services, keeping your CIO happy and employed longer. You need every new competitive advantage and agile enterprise delivery model you can get for newer products and services for your stakeholders. It's time to recognize and leverage your refreshed BA team to deliver on all cylinders this year.

The Need for Smart Data Visualization

Just because you can do data visualization, should you?
Big Data is a big topic! It is one of the most popular buzzwords in the tech world today. From finance and banking, genomics and healthcare, to marketing and communications, nearly all industries want to utilize data to drive business decisions. Advances in communications, social networking, and information technology have fueled a tsunami of Big Data, paving the way for the development of interactive data visualization tools.
Traditionally, data visualization tools were static, non-interactive graphs and tables that were a staple in board rooms. They provided a visual representation of the data, but required more time to analyze and understand the data. Further, the traditional data visualization tools could be error prone and often required in-depth knowledge of the application in use. The increased interest and advancements we are experiencing with interactive data visualization can be attributed to:
• Advances in computational power, data analysis, and graphics, which have enabled widespread access to data visualization products
• Generation and availability of large amounts of data which cannot be easily analyzed by traditional methods
• A need for rapid analysis and decisions on the large amount of data that is generated within an organization
Interactive tools…the answer or a step in the right direction?

Interactive tools afford a better understanding of relationships and trends in data sets and allow a quick drill-down of data to the smallest unit. These tools were initially developed as ad hoc solutions by organizations to address a specific question within a specific set of data, and they have gained tremendous popularity. Consequently, companies (regardless of size) are racing to develop better, faster data visualization tools, in turn fueling an almost irrational expectation that data visualization is the magic bullet for tackling Big Data. While these expectations may be warranted based on some of the success stories, it is imperative that data analysts and programmers ensure they are asking the right questions and using the right methods in order to generate valuable analyses of the data. Immediate, narrowly focused answers will never provide the desired big-picture solution to Big Data.
Quality interactive tools – key considerations for big picture solutions…

We have all heard about the challenges related to the volume of semi-structured and unstructured data being driven by the popularity and ease of use of mobile devices and platforms like Twitter, Facebook, Tumblr, etc. Currently, a number of standalone products are available to analyze and consume semi-structured and unstructured data. Going forward, these solutions should be incorporated into and offered as part of a comprehensive data visualization solutions suite.
Technology advancements have positively impacted the user friendliness of data visualization tools, which no longer require data managers/analysts to be computer scientists. While this is a positive enhancement, we cannot neglect to recognize that it is becoming increasingly important for the consumers of data visualization tools to become savvy and comfortable with using them. These consumers need to become data scientists: in addition to analyzing the data, they have to look for patterns, hypotheses, outliers, and unusual trends to draw inferences.
Search Image Theory
Human visual perception capabilities are often overlooked by data visualization vendors. The human visual system has a tremendous ability to see patterns and make decisions (the animal kingdom uses similar behavior in prey detection; refer to the Search Image Theory exhibit). These abilities are governed by certain rules regarding the size, shape, color, and proximity of objects. Extensive research has been conducted, and a large amount of data is available, in the field of human visual systems and cognition. Incorporating this research and these concepts into the design and development of data visualization tools will only strengthen their capabilities.
In an ideal world, data visualization tools should not only provide information on what is expected but also help to decipher what is not expected. The tools should be a means to identify outliers and unusual trends, account for various types of data (i.e., structured vs. unstructured), utilize the appropriate analysis methodology (statistical understanding), and incorporate human visual perception. Then, and only then, will data visualization tools help with decision support and lead to better management by exception.
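As a concrete illustration of management by exception, here is a minimal sketch in plain numpy/matplotlib; the data and the 3-sigma rule are invented assumptions, not a prescription for any particular tool. The point is simply that the unexpected values are flagged and drawn to the eye, rather than left buried in the bulk of the series.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data: 90 days of roughly normal activity with a few injected anomalies.
rng = np.random.default_rng(0)
daily_events = rng.normal(loc=200, scale=15, size=90)
daily_events[[20, 55, 70]] = [320, 95, 310]

# A simple statistical rule (3-sigma z-score) stands in for the tool's analytics.
z_scores = (daily_events - daily_events.mean()) / daily_events.std()
outliers = np.abs(z_scores) > 3

# Plot the series, but visually emphasize only the points flagged for review.
plt.plot(daily_events, color="steelblue", label="daily events")
plt.scatter(np.flatnonzero(outliers), daily_events[outliers],
            color="red", zorder=3, label="flagged for review")
plt.legend()
plt.title("Highlighting unexpected values, not just expected ones")
plt.show()
```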
Our experiences with data visualization in an adverse event management system highlighted a number of the issues discussed above and prompted us to address them. In doing so, we were able to provide our client with tools to attain greater levels of efficiency through a cost-effective, low-risk solution.

FAIR open sources deep-learning modules for Torch

Progress in science and technology accelerates when scientists share not just their results, but also their tools and methods. This is one of the reasons why Facebook AI Research (FAIR) is committed to open science and to open sourcing its tools.
Many research projects on machine learning and AI at FAIR use Torch, an open source development environment for numerics, machine learning, and computer vision, with a particular emphasis on deep learning and convolutional nets. Torch is widely used at a number of academic labs as well as at Google/DeepMind, Twitter, NVIDIA, AMD, Intel, and many other companies.
Today, we're open sourcing optimized deep-learning modules for Torch. These modules are significantly faster than the default ones in Torch and have accelerated our research projects by allowing us to train larger neural nets in less time.
This release includes GPU-optimized modules for large convolutional nets (ConvNets), as well as networks with sparse activations that are commonly used in natural language processing applications. Our ConvNet modules include a fast FFT-based convolutional layer using custom CUDA kernels built around NVIDIA's cuFFT library. We'll discuss a few more details about this module later in this post; for a deeper dive, have a look at this paper.
In addition to this module, the release includes a number of other CUDA-based modules and containers, including:
  • Containers that allow the user to parallelize training across multiple GPUs using either the data-parallel model (mini-batch split over GPUs) or the model-parallel model (network split over multiple GPUs).
  • An optimized lookup table that is often used when learning embeddings of discrete objects (e.g., words) in neural language models.
  • A Hierarchical SoftMax module to speed up training over an extremely large number of classes.
  • Cross-map pooling (sometimes known as MaxOut), often used for certain types of visual and text models.
  • A GPU implementation of 1-bit SGD based on the paper by Frank Seide et al.
  • A significantly faster Temporal Convolution layer, which computes the 1-D convolution of an input with a kernel, typically used in ConvNets for speech recognition and natural language applications. Our version improves upon the original Torch implementation by utilizing the same BLAS primitives in a significantly more efficient regime; observed speedups range from 3x to 10x on a single GPU, depending on the input sizes, kernel sizes, and strides (see the sketch after this list).
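To make the "same BLAS primitives, more efficient regime" idea concrete, here is a hedged numpy sketch (not the fbcunn implementation; the dimensions and names are invented). It contrasts a temporal convolution computed one output frame at a time against the same computation unfolded into a single large matrix multiply that one GEMM call can handle.

```python
import numpy as np

# Sketch only: a temporal (1-D) convolution over a sequence of feature frames.
rng = np.random.default_rng(1)
seq_len, in_dim, out_dim, kw, stride = 100, 16, 32, 5, 1
x = rng.standard_normal((seq_len, in_dim))             # input: frames x features
weight = rng.standard_normal((out_dim, kw * in_dim))   # one row per output feature
n_frames = (seq_len - kw) // stride + 1

# Naive regime: one small matrix-vector product per output frame.
naive = np.stack([
    weight @ x[t * stride:t * stride + kw].ravel() for t in range(n_frames)
])

# Unfolded regime: gather every kw-frame window into one matrix, then issue a
# single (n_frames x kw*in_dim) @ (kw*in_dim x out_dim) multiplication.
windows = np.stack([
    x[t * stride:t * stride + kw].ravel() for t in range(n_frames)
])
unfolded = windows @ weight.T

print(np.allclose(naive, unfolded))  # True
```

Both paths produce identical outputs; the difference is that the unfolded form hands the BLAS library one big, cache-friendly multiplication instead of many small ones.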

FFT-based convolutional layer code

The most significant part of this release involves the FFT-based convolutional layer code, because convolutions take up the majority of the compute time in training ConvNets. Since improving training time of these models translates to faster research and development, we've spent considerable engineering effort on improving the GPU convolution layers. The work has produced notable results, achieving speedups of up to 23.5x compared with the fastest publicly available code. As far as we can tell, our code is faster than any other publicly available code when used to train popular architectures such as typical deep ConvNets for object recognition on the ImageNet data set.
The improvements came from building on insights provided by our partners at NYU who showed in an ICLR 2014 paper, for the first time, that doing convolutions via FFT can give a speedup in the context of ConvNets. It is well known that convolutions turn into point-wise multiplications when performed in the Fourier domain, but exploiting this property in the context of a ConvNet where images are small and convolution kernels are even smaller was not easy because of the overheads involved. The sequence of operations involves taking an FFT of the input and kernel, multiplying them point-wise, and then taking an inverse Fourier transform. The back-propagation phase, being a convolution between the gradient with respect to the output and the transposed convolution kernel, can also be performed in the Fourier domain. The computation of the gradient with respect to the convolution kernels is also a convolution between the input and the gradient with respect to the output (seen as a large kernel).
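As a hedged illustration of the convolution-theorem idea described above (plain numpy with small invented sizes, not the cuFFT-based fbcunn code), the snippet below checks that a direct ConvNet-style cross-correlation matches the FFT path of transform, point-wise multiply, and inverse transform.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
kernel = rng.standard_normal((5, 5))

# Direct "valid" cross-correlation, the way ConvNet layers typically compute it.
kh, kw = kernel.shape
out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
direct = np.empty((out_h, out_w))
for i in range(out_h):
    for j in range(out_w):
        direct[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)

# FFT path: zero-pad both operands to a common size, multiply point-wise
# (conjugating the kernel spectrum gives cross-correlation), invert, and crop.
size = (image.shape[0] + kh - 1, image.shape[1] + kw - 1)
F_image = np.fft.rfft2(image, size)
F_kernel = np.fft.rfft2(kernel, size)
full = np.fft.irfft2(F_image * np.conj(F_kernel), size)
fft_result = full[:out_h, :out_w]   # the unwrapped entries are the valid outputs

print(np.allclose(direct, fft_result))  # True, up to floating-point error
```

The overheads the paragraph mentions come from the padding and the forward/inverse transforms themselves, which is why the payoff grows with kernel size.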
We've used this core idea and combined it with a dynamic auto-tuning strategy that explores multiple specialized code paths. The current version of our code is built on top of NVIDIA's cuFFT library. We are working on an even faster version using custom FFT CUDA kernels.

The visualizations shown here are color-coded maps that show the relative speed up of Facebook's ConvolutionFFT vs NVIDIA's CuDNN when timed over an entire round trip of the forward and back propagation stages. The heat map is red when we are slower and green when we are faster, with the color amplified according to the magnitude of speedup.
For small kernel sizes (3x3), the speedup is moderate, with a top speed of 1.84x faster than CuDNN.

For larger kernel sizes, starting from (5x5), the speedup is considerable. With larger kernel sizes (13x13), we have a top speed that is 23.5x faster than CuDNN's implementations.

Moreover, in use cases where you convolve with fairly large kernels (as in this paper from Jonathan J. Tompson et al., where they use 128x128 convolution kernels), this path is a practically viable strategy.

The result you see is some of the fastest convolutional layer code available (as of the writing of this post), and the code is now open sourced for all to use. For more technical details on this work, you are invited to read our Arxiv paper.

Parallelization over Multiple GPUs

From the engineering side, we've also been working on the ability to parallelize training of neural network models over multiple GPU cards simultaneously. We worked on minimizing the parallelization overhead while making it extremely simple for researchers to use the data-parallel and model-parallel modules (which are part of fbcunn). Once researchers push their model into these easy-to-use containers, the code automatically schedules the model over multiple GPUs to maximize speedup. We've showcased this in an example that trains a ConvNet over ImageNet using multiple GPUs.
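For intuition only, here is a minimal numpy sketch of the data-parallel model mentioned above; the toy linear model, shapes, and shard count are invented assumptions, and the real fbcunn containers schedule actual networks across GPUs rather than numpy arrays. It shows why the scheme works: splitting a mini-batch into equal shards and averaging the shard gradients reproduces the full-batch gradient.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((64, 10))   # one mini-batch of 64 examples
y = rng.standard_normal((64, 1))
w = rng.standard_normal((10, 1))    # a single linear "model"

def gradient(X, y, w):
    """Gradient of mean squared error 0.5 * mean((Xw - y)^2) with respect to w."""
    return X.T @ (X @ w - y) / len(X)

# Single-device gradient over the whole batch.
full_grad = gradient(X, y, w)

# "Two GPUs": each worker sees half the batch; the gradients are averaged.
shards = np.split(np.arange(len(X)), 2)
shard_grads = [gradient(X[idx], y[idx], w) for idx in shards]
averaged = sum(shard_grads) / len(shard_grads)

print(np.allclose(full_grad, averaged))  # True
```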


Integrating the real World into the Web

How do you layer a programmable Internet of smart things on top of the web? That's the question addressed by Dominique Guinard in his ambitious dissertation: A Web of Things Application Architecture - Integrating the Real-World (slides). With the continued siloing of content, perhaps we can keep our things open and talking to each other?
In the architecture, things are modeled using REST, they are findable via search, they are social via a social access controller, and they are mashupable. Here's a great graphical overview of the entire system:

Abstract:

A central concern in the area of pervasive computing has been the integration of digital artifacts with the physical world and vice-versa. Recent developments in the field of embedded devices have led to smart things increasingly populating our daily life. We define smart things as digitally enhanced physical objects and devices that have communication capabilities. Application domains are, for instance, wireless sensor and actuator networks in cities, making them more context-aware and thus smarter. New appliances such as smart TVs, alarm clocks, fridges or digital picture frames make our living-rooms and houses more energy efficient and our lives easier. Industries benefit from increasingly more intelligent machines and robots. Usual objects tagged with radio-tags or barcodes become linked to virtual information sources and offer new business opportunities.
As a consequence, Internet of Things research is exploring ways to connect smart things together and build upon these networks. To facilitate these connections, research and industry have come up over the last few years with a number of low-power network protocols. However, while getting increasingly more connected, embedded devices still form multiple, small, incompatible islands at the application layer: developing applications using them is a challenging task that requires expert knowledge of each platform. As a consequence, smart things remain hard to integrate into composite applications. To remedy this fact, several service platforms proposing an integration architecture appeared in recent years. While some of them are successfully implemented on some appliances and machines, they are, for the most part, not compatible with one another. Furthermore, their complexity and lack of well-known tools let them only reach a relatively small community of expert developers and hence their usage in applications has been rather limited.
On the other hand, the Internet is a compelling example of a scalable global network of computers that interoperate across heterogeneous hardware and software platforms. On top of the Internet, the Web illustrates well how a set of relatively simple and open standards can be used to build very flexible systems while preserving efficiency and scalability. The cross-integration and development of composite applications on the Web, alongside its ubiquitous availability across a broad range of devices (e.g., desktops, laptops, mobile phones, set-top boxes, gaming devices, etc.), make the Web an outstanding candidate for a universal integration platform. Web sites do not offer only pages anymore, but Application Programming Interfaces that can be used by other Web resources to create new, ad-hoc and composite applications running in the computing cloud and being accessed by desktops or mobile computers.
In this thesis we use the Web and its emerging technologies as the basis of a smart things application integration platform. In particular, we propose a Web of Things application architecture offering four layers that simplify the development of applications involving smart things. First, we address device accessibility and propose implementing, on smart things, the architectural principles that are at the heart of the Web, such as Representational State Transfer (REST). We extend the REST architecture by proposing and implementing a number of improvements for the special requirements of the physical world, such as the need for domain-specific proxies or real-time communication.
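A minimal sketch of what such a RESTful smart thing might look like, assuming a hypothetical temperature sensor exposed over HTTP/JSON using only the Python standard library; the resource path and reading are invented, and the dissertation's actual gateways and device drivers differ.

```python
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

class ThingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/sensors/temperature":
            # In a real deployment this would read the physical sensor.
            body = json.dumps({"value": round(random.uniform(18, 24), 1),
                               "unit": "celsius"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "Unknown resource")

if __name__ == "__main__":
    # GET http://localhost:8080/sensors/temperature returns a JSON reading.
    HTTPServer(("", 8080), ThingHandler).serve_forever()
```

Because the device speaks plain HTTP and JSON, any Web client, mashup tool, or search crawler can interact with it without device-specific drivers, which is exactly the accessibility property this layer targets.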
In the second layer we study findability: In a Web populated by billions of smart things, how can we identify the devices we can interact with, the devices that provide the right service for our application? To address these issues we propose a lightweight metadata format that search engines can understand, together with a Web-oriented discovery and lookup infrastructure that leverages the particular context of smart things.
While the Web of Things fosters a rather open network of physical objects, it is very unlikely that in the future access to smart things will be open to anyone. In the third layer we propose a sharing infrastructure that leverages social graphs encapsulated by social networks. We demonstrate how this helps sharing smart things in a straightforward, user-friendly and personal manner, building a Social Web of Things.
Our primary goal in bringing smart things to the Web is to facilitate their integration into composite applications. Just as Web developers and tech-savvy users create Web 2.0 mashups (i.e., lightweight, ad-hoc compositions of several services on the Web), they should be able to create applications involving smart things with similar ease. Thus, in the composition layer we introduce physical mashups and propose a software platform, built as an extension of an open-source workflow engine, that offers basic constructs which can be used to build mashup editors for the Web of Things.
Finally, to test our architecture and the proposed tools, we apply them to two types of smart things. First we look at wireless sensor networks, in particular at energy and environmental monitoring sensor nodes. We evaluate the benefits of applying the proposed architecture first empirically by means of several prototypes, then quantitatively by running performance evaluations, and finally qualitatively with the help of several developers who used our frameworks to develop mobile and Web-based applications. Then, to better understand and evaluate how the Web of Things architecture can facilitate the development of real-world-aware business applications, we study automatic identification systems and propose a framework for bringing RFID data to the Web and global RFID information systems to the cloud. We evaluate the performance of this framework and illustrate its benefits with several prototypes.
Put together, these contributions materialize into an ecosystem of building-blocks for the Web of Things: a world-wide and interoperable network of smart things on which applications can be easily built, one step closer to bridging the gap between the virtual and physical worlds.

Software Architecture AntiPatterns

Architecture AntiPatterns focus on the system-level and enterprise-level structure of applications and components. Although the engineering discipline of software architecture is relatively immature, what has been determined repeatedly by software research and experience is the overarching importance of architecture in software development:

• Good architecture is a critical factor in the success of system development.
• Architecture-driven software development is the most effective approach to building systems; architecture-driven approaches are superior to requirements-driven, document-driven, and methodology-driven approaches. Projects often succeed in spite of methodology, not because of it.
• Software architecture is a subset of the overall system architecture, which includes all design and implementation aspects, including hardware and technology selection.

Important principles of architecture include the following:

Architecture provides a view of the whole system. This distinguishes architecture from other analysis and design models that focus on parts of a system. An effective way to model whole systems is through multiple viewpoints. The viewpoints correlate to various stakeholders and technical experts in the system-development process. The following AntiPatterns focus on some common problems and mistakes in the creation, implementation, and management of architecture.

Autogenerated Stovepipe: This AntiPattern occurs when migrating an existing software system to a distributed infrastructure. An Autogenerated Stovepipe arises when converting the existing software interfaces to distributed interfaces. If the same design is used for distributed computing, a number of problems emerge.

Stovepipe Enterprise: A Stovepipe System is characterized by a software structure that inhibits change. The refactored solution describes how to abstract subsystems and components to achieve an improved system structure. The Stovepipe Enterprise AntiPattern is characterized by a lack of coordination and planning across a set of systems.

Jumble: When horizontal and vertical design elements are intermixed, an unstable architecture results. The intermingling of horizontal and vertical design elements limits the reusability and robustness of the architecture and the system software components.

Stovepipe System: Subsystems are integrated in an ad hoc manner using multiple integration strategies and mechanisms, and all are integrated point to point. The integration approach for each pair of subsystems is not easily leveraged toward that of other subsystems. The Stovepipe System AntiPattern is the single-system analogy of Stovepipe Enterprise, and is concerned with how the subsystems are coordinated within a single system.

Cover Your Assets: Document-driven software processes often produce less-than-useful requirements and specifications because the authors evade making important decisions. In order to avoid making a mistake, the authors take a safer course and elaborate upon alternatives.

Vendor Lock-In: Vendor Lock-In occurs in systems that are highly dependent upon proprietary architectures. The use of architectural isolation layers can provide independence from vendor-specific solutions.
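A hedged Python sketch of such an isolation layer (the vendor names and calls are hypothetical): application code depends only on a small vendor-neutral interface, while vendor-specific details live behind adapters, so a vendor swap does not ripple through the system.

```python
from abc import ABC, abstractmethod

class MessageQueue(ABC):
    """Vendor-neutral interface the rest of the application depends on."""
    @abstractmethod
    def publish(self, topic: str, payload: bytes) -> None: ...

class AcmeMQAdapter(MessageQueue):
    """Wraps a hypothetical proprietary AcmeMQ client."""
    def publish(self, topic: str, payload: bytes) -> None:
        print(f"AcmeMQ-specific call: send({topic!r}, {len(payload)} bytes)")

class OtherVendorAdapter(MessageQueue):
    """A second vendor can be dropped in without touching callers."""
    def publish(self, topic: str, payload: bytes) -> None:
        print(f"OtherVendor-specific call: emit({topic!r})")

def notify_shipping(queue: MessageQueue) -> None:
    # Application logic sees only the neutral interface.
    queue.publish("orders.shipped", b"order-42")

notify_shipping(AcmeMQAdapter())
notify_shipping(OtherVendorAdapter())
```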

Wolf Ticket: A Wolf Ticket is a product that claims openness and conformance to standards that have no enforceable meaning. The products are delivered with proprietary interfaces that may vary significantly from the published standard.

Architecture by Implication: Management of risk in follow-on system development is often overlooked due to overconfidence and recent system successes. A general architecture approach that is tailored to each application system can help identify unique requirements and risk areas.

Warm Bodies: Software projects are often staffed with programmers with widely varying skills and productivity levels. Many of these people may be assigned to meet staff size objectives (so-called “warm bodies”). Skilled programmers are essential to the success of a software project. So-called heroic programmers are exceptionally productive, but as few as 1 in 20 have this talent. They produce an order of magnitude more working software than an average programmer.

Design by Committee: The classic AntiPattern from standards bodies, Design by Committee creates overly complex architectures that lack coherence. Clarification of architectural roles and improved process facilitation can refactor bad meeting processes into highly productive events.

Swiss Army Knife: A Swiss Army Knife is an excessively complex class interface. The designer attempts to provide for all possible uses of the class. In the attempt, he or she adds a large number of interface signatures in a futile attempt to meet all possible needs.
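An exaggerated, hypothetical sketch of the problem and of one direction for narrowing it:

```python
# Swiss Army Knife: one class tries to anticipate every conceivable use, so
# every caller faces dozens of signatures that mostly don't apply to them.
class UniversalDocument:
    def load(self, path): ...
    def load_from_url(self, url): ...
    def load_from_database(self, conn, query): ...
    def save(self, path): ...
    def save_as_pdf(self, path): ...
    def save_as_html(self, path): ...
    def print_(self, printer): ...
    def fax(self, number): ...
    def email(self, address): ...
    def encrypt(self, key): ...
    def compress(self, level): ...
    def translate(self, language): ...
    def spell_check(self, dictionary): ...
    # ...and so on, for every imaginable need.

# A narrower design keeps the core concept small and moves optional behavior
# into separate collaborators (exporters, transports, and the like).
class Document:
    def load(self, path): ...
    def save(self, path): ...

class PdfExporter:
    def export(self, document: Document, path): ...
```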

Reinvent the Wheel: The pervasive lack of technology transfer between software projects leads to substantial reinvention. Design knowledge buried in legacy assets can be leveraged to reduce time-to-market, cost, and risk.

The Grand Old Duke of York: Egalitarian software processes often ignore people’s talents to the detriment of the project. Programming skill does not equate to skill in defining abstractions. There appear to be two distinct groups involved in software development: abstractionists and their counterparts, the implementationists.

Software Development AntiPatterns

Good software structure is essential for system extension and maintenance. Software development is a chaotic activity; therefore, the implemented structure of systems tends to stray from the planned structure as determined by architecture, analysis, and design. Software refactoring is an effective approach for improving software structure. The resulting structure does not have to resemble the original planned structure.

The structure changes because programmers learn constraints and approaches that alter the context of the coded solutions. When used properly, refactoring is a natural activity in the programming process. For example, the solution for the Spaghetti Code AntiPattern defines a software development process that incorporates refactoring. Refactoring is strongly recommended prior to performance optimization. Optimizations often involve compromises to program structure. Ideally, optimizations affect only small portions of a program. Prior refactoring helps partition optimized code from the majority of the software.

Development AntiPatterns utilize various formal and informal refactoring approaches. The following summaries provide an overview of the Development AntiPatterns found in this chapter and focus on the development AntiPattern problem. Included are descriptions of both development and mini-AntiPatterns. The refactored solutions appear in the appropriate AntiPattern templates that follow the summaries.

The Blob: Procedural-style design leads to one object with a lion’s share of the responsibilities, while most other objects only hold data or execute simple processes. The solution includes refactoring the design to distribute responsibilities more uniformly and isolating the effect of changes.
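A small hypothetical illustration: one class accumulates most of the behavior while its collaborators only hold data, and the refactoring direction moves responsibilities back to the objects that own the data.

```python
# The Blob: one class owns most of the behavior, while the others are passive.
class LibrarySystem:                      # the Blob
    def add_book(self, book): ...
    def register_member(self, member): ...
    def check_out(self, member, book): ...
    def calculate_fines(self, member): ...
    def print_overdue_report(self): ...
    def backup_database(self): ...

class Book:                               # data only
    def __init__(self, title): self.title = title

class Member:                             # data only
    def __init__(self, name): self.name = name

# Refactored direction: move behavior next to the data it uses, so each class
# has a coherent, limited set of responsibilities and changes stay localized.
class Loan:
    def __init__(self, book, days_overdue=0):
        self.book, self.days_overdue = book, days_overdue
    def fine(self, daily_rate=0.25):
        return daily_rate * self.days_overdue

class MemberAccount:
    def __init__(self, name):
        self.name, self.loans = name, []
    def check_out(self, book): self.loans.append(Loan(book))
    def total_fines(self): return sum(loan.fine() for loan in self.loans)
```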

Continuous Obsolescence: Technology is changing so rapidly that developers often have trouble keeping up with current versions of software and finding combinations of product releases that work together. Given that every commercial product line evolves through new releases, the situation is becoming more difficult for developers to cope with. Finding compatible releases of products that successfully interoperate is even harder.

Lava Flow: Dead code and forgotten design information are frozen in an ever-changing design. This is analogous to a lava flow with hardening globules of rocky material. The refactored solution includes a configuration management process that eliminates dead code and evolves or refactors design toward increasing quality.

Ambiguous Viewpoint: Object-oriented analysis and design (OOA&D) models are often presented without clarifying the viewpoint represented by the model. By default, OOA&D models denote an implementation viewpoint that is potentially the least useful. Mixed viewpoints don’t allow the fundamental separation of interfaces from implementation details, which is one of the primary benefits of the object-oriented paradigm.

Functional Decomposition: This AntiPattern is the output of experienced, nonobject-oriented developers who design and implement an application in an object-oriented language. The resulting code resembles a structural language (Pascal, FORTRAN) in class structure. It can be incredibly complex as smart procedural developers devise very “clever” ways to replicate their time-tested methods in an object-oriented architecture.
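A tiny hypothetical example of the pattern: procedural "function classes" in an object-oriented language, versus modeling the domain concept that actually owns the data and behavior.

```python
# Functional Decomposition in an OO language: each "class" is really just a
# procedure with a constructor.
class CalculateInterest:
    def __init__(self, balance, rate):
        self.balance, self.rate = balance, rate
    def execute(self):
        return self.balance * self.rate

class PrintStatement:
    def __init__(self, balance):
        self.balance = balance
    def execute(self):
        print(f"Balance: {self.balance:.2f}")

# Object-oriented alternative: one domain object owns the related state and
# behavior instead of scattering it across function-shaped classes.
class Account:
    def __init__(self, balance, rate):
        self.balance, self.rate = balance, rate
    def accrue_interest(self):
        self.balance += self.balance * self.rate
    def print_statement(self):
        print(f"Balance: {self.balance:.2f}")
```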

Poltergeists: Poltergeists are classes with very limited roles and effective life cycles. They often start processes for other objects. The refactored solution includes a reallocation of responsibilities to longer-lived objects that eliminate the Poltergeists.

Boat Anchor: A Boat Anchor is a piece of software or hardware that serves no useful purpose on the current project. Often, the Boat Anchor is a costly acquisition, which makes the purchase even more ironic.

Golden Hammer: A Golden Hammer is a familiar technology or concept applied obsessively to many software problems. The solution involves expanding the knowledge of developers through education, training, and book study groups to expose developers to alternative technologies and approaches.

Dead End: A Dead End is reached by modifying a reusable component if the modified component is no longer maintained and supported by the supplier. When these modifications are made, the support burden transfers to the application system developers and maintainers. Improvements in the reusable component are not easily integrated, and support problems can be blamed upon the modification.

Spaghetti Code: Ad hoc software structure makes it difficult to extend and optimize code. Frequent code refactoring can improve software structure, support software maintenance, and enable iterative development.

Input Kludge: Software that fails straightforward behavioral tests may be an example of an input kludge, which occurs when ad hoc algorithms are employed for handling program input.

Walking through a Minefield: Using today’s software technology is analogous to walking through a high-tech mine field. Numerous bugs are found in released software products; in fact, experts estimate that original source code contains two to five bugs per line of code.

Cut-and-Paste Programming: Code reused by copying source statements leads to significant maintenance problems. Alternative forms of reuse, including black-box reuse, reduce maintenance issues by having common source code, testing, and documentation.
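A brief hypothetical sketch of the problem and of the black-box alternative (the file format and field names are invented):

```python
# Cut-and-paste: the same parsing logic copied into two places; a bug fix
# applied to one copy is easily missed in the other, and the copies drift.
def load_orders(path):
    rows = []
    for line in open(path):
        parts = line.strip().split(",")
        rows.append({"id": parts[0], "qty": int(parts[1])})
    return rows

def load_returns(path):
    rows = []
    for line in open(path):          # pasted copy; later edits diverge
        parts = line.strip().split(",")
        rows.append({"id": parts[0], "qty": int(parts[1])})
    return rows

# Black-box reuse: one shared, tested routine with a single source of truth.
def load_csv(path, fields=("id", "qty"), converters=(str, int)):
    with open(path) as handle:
        return [dict(zip(fields, (conv(value) for conv, value in
                                  zip(converters, line.strip().split(",")))))
                for line in handle if line.strip()]
```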

Mushroom Management: In some architecture and management circles, there is an explicit policy to keep system developers isolated from the system’s end users. Requirements are passed second-hand through intermediaries, including architects, managers, or requirements analysts.

Security - OWASP Top 10





What Are Application Security Risks?




Excellent Books

Architecture, patterns and programming skills

Architecture and Design

Programming and code

Testing

Database

Technology specific

Java

Groovy/Grails

ORM and Database

Process and methodology

People & Process for beginners

  • Ship It! is a good introductory book on doing Agile in practice. It's not as technical as The Pragmatic Programmer; it's more project- and soft-skills-oriented.

People & Process advanced

People & Process classics

Consulting

Other

Computer history

Off-topic, but related books

How to write text

Resources