Q&A: How The Container Store uses AppDynamics for Continuous Testing


Last week, I had the pleasure of hosting and co-presenting with August Azzarello from The Container Store in a webinar titled “How The Container Store uses AppDynamics for Continuous Testing”.

In this webinar, August explained how they have taken application performance management (APM) one step further by embracing APM in their development lifecycle, enabling their team to do continuous testing to catch and resolve issues before customers are impacted.

The eCommerce team at The Container Store was already using AppDynamics to monitor the performance of their eCommerce applications. After learning about AppDynamics' capabilities in a meeting with the eCommerce team, August started evaluating AppDynamics to address some of the challenges he was facing in his testing environment:

  • Set performance expectations with business stakeholders prior to production deployment

  • Expand automated performance testing with efficient reporting and monitoring

  • Report on functional tests

  • Test new calls to third-party remote services prior to going live

August explained how he addressed these challenges by deploying AppDynamics in his test environment for continuous testing. He discussed some of the APM capabilities he uses in his testing environment and the benefits to his organization, and shared some best practices based on what he has learned. After August's presentation, I discussed BizDevOps, a process that applies DevOps practices to further drive the overall business agenda, and shared five keys to success with (Biz)DevOps.

Slides: How The Container Store uses AppDynamics in their development lifecycle (from AppDynamics)

We had a very interactive session after the prepared content was presented. August and I answered many questions during the Q&A, but many remained unanswered. You can listen to the questions and answers that were covered during the webinar in the on-demand recording.

August and I have since responded to all the unanswered questions; here are the top questions and answers from the webinar.

In the test environment, what diagnostics or root-cause analysis tools did you use before AppDynamics and how do they compare?

August: At The Container Store, prior to AppDynamics we just had our plain old logs and Splunk to help parse them. We were limited to what we could get out of the JVM/JMX layer for most metrics, and we usually had to depend on operations to get them. So AppDynamics really empowered us in the test environment.

In the past, I used Zabbix in a test environment, mainly for its low cost, but it was a nightmare to configure and maintain; the amount of time and money we spent trying to make it sufficient could easily have bought us AppDynamics (in hindsight). And Zabbix never got us anywhere near the detail of information that AppDynamics does. No comparison; I'd fight that battle any day to get AppD in a test environment, because the ROI is huge.

Suppose in one Business Transaction (BT) there are multiple requests… Does AppDynamics identify this as a single BT or multiple BTs?

August: It depends on how they are configured and where these requests are; if you could be a little more specific, I could help more precisely. For example, if a servlet call to /shop makes many requests behind the scenes, you see requests to the DB, CDN, and other service tiers, and these all show on the one BT for /shop. At the service tiers, you could have separate BTs set up if desired, making those BTs specific to only what occurred at the service layer.

Do you guys use any automation tools for testing at The Container Store?

August: We utilize Selenium with Ruby bindings for web functional testing. For performance testing of all web and service layers, we utilize Locust.IO (a Python framework). We also utilize a .NET-based tool called Ranorex for functional automation of our store systems' point of sale. Links to the tools we use are below.

http://www.seleniumhq.org/

http://locust.io/

http://www.ranorex.com/
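For readers less familiar with Locust, here is a minimal, hypothetical sketch of what a Locust load test file looks like, written against Locust's current HttpUser API. The /shop and /cart paths are placeholders, not The Container Store's actual endpoints.

```python
# locustfile.py -- hypothetical minimal Locust load test.
# The paths below are placeholders for illustration only.
from locust import HttpUser, task, between

class StorefrontUser(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks.
    wait_time = between(1, 3)

    @task(3)
    def browse_shop(self):
        # Weighted 3x: most simulated traffic hits the storefront page.
        self.client.get("/shop")

    @task
    def view_cart(self):
        self.client.get("/cart")
```

Running `locust -f locustfile.py --host https://your-test-host` starts the Locust web UI, where you pick the number of simulated users and the spawn rate; the resulting load is what AppDynamics then observes in the test environment.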

Do you use Docker containers? If so, how do you find the AppDynamics Docker Monitoring Extension integrates with your testing environment? If not, can the Docker extension identify performance bottlenecks in containerized code?

August: We have not experimented with this as of yet, but we recently started having these conversations, which is why I wanted to respond to this question. I will share when I know more details about how this works, and please let me know if you gain any ground on this subject; sharing our experiences could help us both! :)

Anand: The AppDynamics Docker Monitoring Extension monitors and reports on various metrics, such as the total number of containers, running containers, images, CPU usage, memory usage, and network traffic. The extension gathers metrics from the Docker Remote API over either a Unix socket or TCP, giving you a choice of data collection protocol.

The Docker metrics can be correlated with metrics from the applications running in the containers. For example, the overall performance (calls per minute) of a web server deployed in a Docker container can be correlated with Docker performance metrics such as network transmit/receive and CPU usage. You can also set up health rules on these metrics.

These container-level metrics can be used for performance and functional testing in a test environment as well.
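As a rough illustration of the kind of raw data the extension draws from, the hedged Python sketch below polls the Docker Remote API directly through the docker SDK (pip install docker) and prints per-container CPU, memory, and network figures. This is not the AppDynamics extension itself, only a way to see the underlying metrics it reports on.

```python
# Hedged sketch: read container stats straight from the Docker Remote API
# via the docker SDK. Not the AppDynamics Docker Monitoring Extension itself.
import docker

client = docker.from_env()  # connects over the local Unix socket by default

for container in client.containers.list():
    stats = container.stats(stream=False)  # one-shot stats snapshot
    cpu_total = stats["cpu_stats"]["cpu_usage"]["total_usage"]
    mem_usage = stats["memory_stats"].get("usage", 0)
    networks = stats.get("networks", {})
    rx = sum(n["rx_bytes"] for n in networks.values())
    tx = sum(n["tx_bytes"] for n in networks.values())
    print(f"{container.name}: cpu={cpu_total} mem={mem_usage} rx={rx} tx={tx}")
```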

Is there a way to physically limit/cap the amount of overhead used?

August: We’d need to confirm with a technical rep from AppDynamics, but I do not believe so. I think the only way to control the overhead is by selecting either Production or Development mode. If it helps, I’ve tested performance both with and without AppDynamics agents installed.

AppDynamics says you can see up to ~2% overhead in production; I’ve found it to be more like 1%, and I find it hard to detect any performance hit in production mode. In development mode there is more overhead, but we decided it was worth it for having EVERY transaction snapshot and all the data for a test environment. Since we baseline with the system in development mode, we include this in our testing, knowing production will have less overhead.

Have you used information points for test? Or is the full BT more useful?

August: Currently, we’ve found full BTs to be more useful. We actually attach a BT snapshot export to bugs in our bug-tracking system. We don’t really incorporate information points into our test strategy, but now you have me thinking we could explore that. I’ve found the BTs to be more efficient because we had them set up anyway, and they are central to the overall AppDynamics strategy, which is wrapped around business transactions. We do use information points for watching things like conversion or email tracking. I’ll take another look at information points from a test standpoint and see if we’d add anything; thanks for bringing it up, great question. If you find a useful way to use information points in test, please share!

What would you wish APM could do that it currently doesn’t?

August: From a test-only standpoint, I wish we had a better ability to archive reports and information at the most granular resolution of one minute. Snapshots can be archived manually; otherwise they are only retained for 14 days. We end up saving reports and the like to disk as our archive process. I have not tried very hard, but there is no obvious way to do all of this within AppDynamics, to my knowledge. Graphs drop to 10-minute resolution after 4 hours, and to 1-hour resolution after 3 days. You still have your information; it just is not as granular as sometimes needed.

How can we trace (compare) the performance latency before and after changes in the build? By profiling the transaction logs?

August: You can verify performance between builds very easily in a number of ways. I’ve found that response times per call are very indicative of performance changes. This can easily be verified on any of the flow maps, dashboards, the metric browser, or the business transaction pages, and it can be done for the application as a whole, for any specific tier, or for any individual business transaction or call. We’ve never had to profile transaction logs to detect performance changes between builds; it has all been done in the GUI.
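To make that concrete, here is a small, hypothetical Python sketch of the kind of build-over-build check August describes: given average response times per business transaction for two builds (pulled, for example, from the metric browser or the REST API), it flags anything that slowed down beyond a threshold. The endpoint names and numbers are made up.

```python
# Hypothetical build-over-build regression check on average response times.
REGRESSION_THRESHOLD = 0.20  # flag BTs that slowed down by more than 20%

def compare_builds(baseline_ms, candidate_ms, threshold=REGRESSION_THRESHOLD):
    """baseline_ms / candidate_ms: {business transaction: avg response time in ms}."""
    regressions = {}
    for bt, old in baseline_ms.items():
        new = candidate_ms.get(bt)
        if new is None or old == 0:
            continue
        change = (new - old) / old
        if change > threshold:
            regressions[bt] = (old, new, round(change * 100, 1))
    return regressions

# Made-up numbers for illustration:
baseline = {"/shop": 180.0, "/cart": 95.0, "/checkout": 410.0}
candidate = {"/shop": 310.0, "/cart": 97.0, "/checkout": 405.0}
print(compare_builds(baseline, candidate))  # {'/shop': (180.0, 310.0, 72.2)}
```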

What is the maximum number of business transactions per tier in AppDynamics? How can we purge transactions automatically if they exceed the threshold, instead of tracking them manually?

August: For performance reasons, there is a default limit of 50 business transactions per app agent and 200 business transactions per application. There is no limit at the tier level. We use catch-all transactions to catch anything that is not defined as a BT and to keep things from going into All Other Traffic. With Business Transaction Lockdown, you can enforce these rules and avoid filling up your 200 BTs per application. I do weekly reviews of the catch-all and All Other Traffic buckets to determine whether there is anything new that needs to be promoted to a real BT. I’ve included links to documentation below that may help.

https://docs.appdynamics.com/display/PRO14S/Organizing+Traffic+as+Business+Transactions

https://docs.appdynamics.com/display/PRO14S/All+Other+Traffic+Business+Transaction 

Does it have the ability to download all the transaction snapshots into Excel or similar applications?

August: In the UI, you have the ability to export snapshots in PDF format. Via the AppDynamics REST API, you can download metrics in XML or JSON format, giving you more flexibility. I’ve found the REST API very easy to use.

Here is a link to REST API information: https://docs.appdynamics.com/display/PRO40/Use+the+AppDynamics+REST+API
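As a rough, hedged sketch of the JSON route, the snippet below pulls metric data from a controller's REST API and writes it to a CSV that Excel can open. The controller host, application name, metric path, and credentials are placeholders; check the REST API documentation linked above for the exact parameters your controller version supports.

```python
# Hedged sketch: pull metric data as JSON from the AppDynamics REST API and
# save it as CSV for Excel. Host, app name, metric path, and credentials are
# placeholders for illustration only.
import csv
import requests

CONTROLLER = "https://controller.example.com"   # placeholder controller URL
APPLICATION = "MyApp"                           # placeholder application name
params = {
    "metric-path": "Business Transaction Performance|Business Transactions|*|*|Average Response Time (ms)",
    "time-range-type": "BEFORE_NOW",
    "duration-in-mins": 60,
    "output": "JSON",
}

resp = requests.get(
    f"{CONTROLLER}/controller/rest/applications/{APPLICATION}/metric-data",
    params=params,
    auth=("user@customer1", "password"),  # placeholder credentials
)
resp.raise_for_status()

with open("metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["metricPath", "startTimeInMillis", "value"])
    for metric in resp.json():
        for point in metric.get("metricValues", []):
            writer.writerow([metric["metricPath"], point["startTimeInMillis"], point["value"]])
```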

Did you evaluate any other tools before selecting AppDynamics?

August: When I started at The Container Store almost two years ago, AppDynamics was already in place, just not utilized the way we use it now in a test environment, so I did not evaluate other tools here. At a previous company, we did quite a bit of work in Zabbix, mainly because it was a cheap solution. But I feel the amount of time and effort to configure and maintain it drastically outweighs the cost of AppDynamics, and you do not get the same level of information with Zabbix. Zabbix and AppDynamics are not a fair product comparison, in my opinion.

Can AppDynamics capture SOAP web message envelopes and store them in snapshots?

Anand: Yes, you can do this by configuring data collectors. Data collectors enable you to supplement your performance monitoring or transaction analytics data with application data.

You can learn more about configuring data collectors at https://docs.appdynamics.com/display/PRO41/Collecting+Application+Data#CollectingApplicationData-ConfiguringaDataCollector

However, you should use judgement about storing application payload data in snapshots. Larger payloads may have performance impacts.

