Overcoming Technical Challenges for Adopting Agile Methods in the Enterprise
Adopting agile methods within the enterprise is a challenging task. Agility isn't a software package that can simply be installed one day; instead, it has to be adapted to the enterprise context, including its cultural, technical, and organizational facets. This article explores challenges associated with setting up development environments, automating testing, implementing continuous integration, and specifying the definition of done within the context of the enterprise.
Development Environment Setup
Every technical lead and development manager wants to reduce the time it takes team members to set up their development environment. Still, developers continue to spend many cycles getting organized before they can become productive on their projects. A key reason setup takes so long is the lack of documentation on setting up the development environment; the second is the number of manual steps involved in the process. So how does one overcome these challenges? For documentation there are a few tenets I follow: simplicity, attention to detail, and automation. Simplicity refers to keeping the document simple to create, maintain, view, and distribute. I use a wiki to manage the content related to setting up my team's development environment; the wiki page has an owner and is updated as part of each iteration. Attention to detail refers to unambiguous and comprehensive guidance: documenting every little thing a developer needs to do in order to start writing code and integrating with the rest of the team's work. Here's what I capture in the environment wiki page:
- List of software packages to install: In my case, this section would capture the Java Developer Kit (JDK), the Eclipse integrated development environment (IDE), Apache Ant, Apache Axis, and SQL Server Management Express.
- For each package, include the location (network drive/Internet/intranet/other) and any credentials necessary. For example, for Apache Ant the location would be our Subversion repository, with the relative path specified from the Subversion working copy folder: <svn-home>/Software/Apache/Ant/1.7.0.
- For each package, capture the system and user-level environment variables that need to be configured on a machine. For instance, Ant requires the ANT_HOME variable and Axis2 requires the AXIS2_HOME variable to be set, with values pointing to the folder structure on the development machine.
- List of additional libraries to obtain – these include any Java archives (JARs), .NET DLL files, or others. Examples of such libraries would be the Java Database Connectivity (JDBC) JARs for accessing Microsoft SQL Server 2005 or the JARs for working with IBM WebSphere MQ.
- How to get user access to the queue manager, database server, and remote machines – a contact person as well as links to relevant procedures and forms. Details such as application credentials in the development environment or user-specific credential forms can be specified here. For instance, I specify an email template – with login user name, our team's application identifier, and a contact person – to be sent to our middleware support group for access to the queue manager.
- How is the source code organized? How to get access to the source code repository? This section would provide a summary of the code organization. For example, I organize code based on data domain (customer data, account data, document data) as well as core reusable utilities (e.g. logger, router, exception handler, notifications manager etc.). This section also provides the location to the subversion trunk as well as additional instructions on getting write access to the repository.
- Setting up working copy (or local developer copy) of code from source code control. This section provides instructions on working copy location based on our enterprise desktop policies. For instance, a particular folder has write access while users don’t have rights on other folders.
- Location of key files – application log files, error files, server trace logs, and thread dumps. Examples in this section include the file path to the Tomcat servlet container log and the WebSphere MQ bindings files.
- Browsing queues and procedure for adding queues. This section will point out the salient queues that a developer should be aware of in our queue manager. It will also provide naming conventions as well as support information for creating new queues.
- Browsing tables and creating database objects such as tables, views, and stored procedures. In my case, this section points to database documentation for our SQL Server 2005 database, generated using SchemaSpy.
- Scripts/utilities used by developers – developer tools that automate routine tasks. Examples here include Apache Ant scripts that compile and execute JUnit test suites, as well as scripts that generate Javadoc from the Java source code.
Perhaps the most important aspect of setting up the developer environment is automation. Automation has several advantages: it can significantly reduce, if not eliminate, configuration errors by bringing consistency to the entire process. That said, here are some challenges with automation:
- Lack of administrative rights: for security reasons and enterprise policies, system administrators might not grant developers administrative rights on their development machines. This can prevent software installation, setting of environment variables, and execution of scripts.
- Need for coordination with external groups: external groups might provide installation software, provision credentials, or approve requests for installations.
- Need to leverage hosted solutions: large enterprises have dedicated shared hosting for middleware capabilities (message queuing, enterprise service bus, application adapters), and interacting with these teams often requires support tickets.
Addressing these challenges may not always be feasible, but here are some ideas. Add all software a developer needs – regardless of where it is obtained – to the source code control system. Organize the software packages using a consistent naming and location convention that indicates version numbers and library dependencies. This ensures every developer can get to the software packages without running around contacting several people across your firm. For software packages requiring installation by external groups, create scripts that generate tickets or, if possible, submit tickets programmatically. Once developers get a working copy of files from source code control, you will want a "setup" script. In my case, this is an Apache Ant script that sets up user-level environment variables, creates a user id/password in our development database, copies licensing keys from network folders to the user's Windows profile, and generates sample XML messages for the variety of web services the team supports.
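A minimal sketch of what such an Ant setup script could look like, assuming a Windows development machine; the target name, property names, server names, paths, and SQL script are all hypothetical illustrations, not the actual script described above:

```xml
<!-- Illustrative "setup" target; names and paths are hypothetical -->
<project name="dev-setup" default="setup" basedir=".">
  <property name="licenses.src" value="\\fileserver\licenses"/>
  <target name="setup">
    <!-- Set a user-level environment variable (requires setx on PATH) -->
    <exec executable="setx">
      <arg value="ANT_HOME"/>
      <arg value="${basedir}/Software/Apache/Ant/1.7.0"/>
    </exec>
    <!-- Copy licensing keys into the user's Windows profile -->
    <copy todir="${user.home}/licenses">
      <fileset dir="${licenses.src}"/>
    </copy>
    <!-- Create the developer's credentials in the development database -->
    <sql driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
         url="jdbc:sqlserver://devdb;databaseName=teamdb"
         userid="${db.admin.user}" password="${db.admin.password}"
         src="create-dev-user.sql"/>
  </target>
</project>
```

A new team member would run this once after checking out the working copy, turning a multi-page manual checklist into a single command.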
Automated Testing and Continuous Integration
Developers don’t automate testing for a variety of technical reasons. I often hear variations of the following:
- “I have a tool that I prefer using” – this could be a tool that the developer created or one from a particular open-source or vendor suite. The tool might have limitations that the developer refuses to acknowledge or address.
- “Don’t have test data” – especially when coordinating data that needs to be correlated across multiple systems/processes.
The tool issue can be addressed in a variety of ways. You can show developers the benefits of a standard testing tool such as JUnit or NUnit, pointing out how it integrates with a build script and runs tests in a consistent, repeatable fashion. In a large enterprise it is unlikely that all your developers will use a single tool; it is more realistic to have at least a standard toolset for development teams within a department. What I do is provide standard test automation scripts – scripts that compile JUnit test cases, execute them, create reports, and email results – as part of every developer's environment. When a developer downloads a working copy from our source code control system, the test script is already in place and they keep adding individual test cases. Our team has a standard test folder structure for placing test cases and test suites for the script to pick up and execute. Additionally, during code reviews I make sure that not only the code but the test code is reviewed as well. As part of the review, test cases that aren't in the suite of automated tests are refactored in, as is test code containing hard-coded data that really should be driven from a configuration file or a database. For example, a test method specified the user role as BRANCH_SUPERVISOR instead of obtaining it from a configuration file. Upon review, this property was added as a name-value pair in a properties file and the test method was refactored to read the user role from the file. Now this property is available to additional tests.
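As a self-contained sketch of that refactoring: the TestConfig helper and the checkSupervisorAccess method are hypothetical stand-ins, and a real suite would load an actual .properties file from the test folder rather than an in-memory string.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Hypothetical helper that supplies test configuration instead of
// literals hard-coded inside test methods.
class TestConfig {
    private final Properties props = new Properties();

    TestConfig(String source) {
        try {
            // A real suite would load a .properties file here; a
            // StringReader keeps the sketch self-contained.
            props.load(new StringReader(source));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    String get(String key) {
        return props.getProperty(key);
    }
}

class UserRoleTest {
    // Stand-in for the real application logic under test.
    static boolean checkSupervisorAccess(String role) {
        return "BRANCH_SUPERVISOR".equals(role);
    }

    static boolean run() {
        // Before: checkSupervisorAccess("BRANCH_SUPERVISOR")
        // After: the role is driven from configuration, so other
        // tests can reuse the same name-value pair.
        TestConfig config = new TestConfig("test.user.role=BRANCH_SUPERVISOR\n");
        return checkSupervisorAccess(config.get("test.user.role"));
    }
}
```

Changing the role under test now means editing one properties entry rather than hunting through test methods.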
In SOA projects where services integrate with application servers, legacy systems, data sources, or packaged applications, a battery of system tests is essential. These tests can initially be executed with mock objects that stand in for the backend integrations, but eventually you will want tests to run against real integration points. At that point, test data – particularly data that ties across multiple systems – becomes critical, and its absence is a big reason for ineffective automated tests. Depending on the complexity of the data and the number of systems in play, you can tackle this in a phased manner. You can start with a local copy of the data, where your developers query multiple data sources and populate a local store. This might sound like a very manual exercise, but if the data is new and the ETL jobs/batch processes to populate the data store aren't complete, it can be a simpler way to start. Over time you can build a suite of classes that encapsulate test data from multiple data stores, ensure the validity and quality of the data, and populate test data as part of your continuous integration script. Regardless, if you decouple your test case code from the specifics of obtaining test data, it will be easier to change the sourcing strategy without breaking the test code. My team initially used properties files to feed test data and over time migrated to a set of data access classes. For instance, we used the Data Access Object (DAO) design pattern to create a set of Java classes that encapsulate operations on customer data and account data entities. These classes provide a simple interface with CRUD methods that a JUnit test method can access: if a test method needs customer data, it imports the DAO factory and the specific DAO class and invokes the getCustomer() method.
Upon fetching a particular customer object, the test can proceed with the rest of its logic. The test is freed from knowing how the data was queried and packaged into an object, and these interfaces become reusable across tests.
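The DAO approach described above can be sketched as follows. Apart from getCustomer(), which the text mentions, the class names, fields, and factory are illustrative, and the in-memory implementation is a stand-in for a real database-backed DAO:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical domain object; fields are illustrative, not a real schema.
class Customer {
    final String id;
    final String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
}

// The narrow interface tests program against; how data is fetched
// (database, properties file, fixture) stays hidden behind it.
interface CustomerDao {
    Customer getCustomer(String id);
}

// In-memory stand-in for a DAO that would query the real data store.
// Swapping implementations does not break any test code.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<String, Customer> store = new HashMap<>();
    void add(Customer c) { store.put(c.id, c); }
    public Customer getCustomer(String id) { return store.get(id); }
}

// Hypothetical factory in the spirit of the DAO pattern: tests ask the
// factory for a DAO and never construct implementations directly.
class DaoFactory {
    static CustomerDao customerDao() {
        InMemoryCustomerDao dao = new InMemoryCustomerDao();
        dao.add(new Customer("C-1001", "Test Customer"));
        return dao;
    }
}
```

A test would simply call DaoFactory.customerDao().getCustomer("C-1001") and proceed with its assertions, exactly the decoupling the paragraph describes.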
Defining What Done Means
In an enterprise setting, specifying the criteria for done can be very tricky. Code that executes perfectly well in your development environment may not work in a test environment. This could be because of security tokens/policies, network/connectivity issues, lack of system stability, or higher-quality testing from a quality assurance (QA) team or end users. To minimize surprises when migrating code, your criteria should include:
- Creation of a comprehensive set of test suites that are repeatable and minimize (if not eliminate) hard-coded data values.
- Addition of test suites to your continuous integration process. For instance, in my team this means adding JUnit test suites to a specified folder that an Apache Ant script can pick up and execute. This Ant script is in turn invoked by the CruiseControl continuous integration server.
- Automated test cases not only for functionality but also for performance (e.g. using tools such as JUnitPerf to automate test scenarios with performance thresholds).
- Extensive test data. In the case of tests that integrate with legacy systems, have your legacy teams work with you on creating tests and defining test criteria. Your developers may not always know the nuances of working with legacy systems, and a few sunny-day scenarios tested in the development environment are unlikely to reflect the data in production.
- Code reviews not only with the internal team but also with infrastructure teams (e.g. DBAs, system administrators).
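The performance-threshold idea above can be sketched without JUnitPerf itself (which decorates existing JUnit tests with timed wrappers). This plain-Java helper is a hypothetical stand-in, and the threshold value is illustrative:

```java
// Minimal stand-in for a timed-test decorator: run a piece of test
// logic and report whether it finished within a time budget.
class TimedCheck {
    static boolean runWithinMillis(Runnable test, long maxElapsedMillis) {
        long start = System.nanoTime();
        test.run();  // the wrapped test logic
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        return elapsedMillis <= maxElapsedMillis;
    }
}
```

A failing result here would flag a performance regression at build time, the same signal a JUnitPerf TimedTest provides inside a continuous integration run.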
This article touched upon a few of the challenges of adopting agile methods within the enterprise and provided strategies for addressing them: set up development environments in a consistent fashion using automated scripts and checklists, facilitate automated testing and continuous integration with standard tooling and transparent test data, and enforce stricter criteria for done. These techniques aren't all-encompassing, but they will help your teams be more productive within the enterprise context.
About the Author
I am Vijay Narayanan, a software development team lead at a financial services firm, building reusable data services and business process automation components. I have worked on several software projects ranging from single-user systems to large, distributed, multi-user platforms with several services. I blog about software reuse at http://softwarereuse.wordpress.com/