It’s 12 o’clock, do you know where your applications are?
I hope your data center doesn’t look like this one, but unfortunately, in this age of SOA and microservices, the data dependency maps for many of our end-user processes do look like this. Do you truly know all the processes that depend on any given component? When was the last time someone did maintenance on what they thought was an isolated service and caused a major outage? When was the last time you tested your DR plan? Did it go well? All of these require a clear understanding of application dependencies and a way to map the changes over time.
Cloud migrations, DR recovery groups, event impact analysis, and change management, to name a few, all require a clear understanding of application dependencies, but the challenge is how to build that understanding and how to maintain it over time. When considering tools in this space, here is a list of things to keep in mind.
- Agent or Agentless
- Level of access required
- Method of discovery
- Inclusiveness of discovery
- Method of consumption
- Ability to manipulate results
- Ability to integrate to other tools
- Ultimate goal
Agent or Agentless: Does the solution require agents deployed on endpoints, or does it operate remotely? If it requires agents, be sure to include their cost and resource requirements in the total cost of ownership. If it’s remote, make sure the tool can collect all the data you’re really going to need to accomplish your goal.
Access: Does the tool require administrator privileges or elevated user access? This is a risk that must be considered, and it complicates your least-privilege security model. Strive to find tools that use the lowest level of access possible.
Discovery: What approach does the tool pursue, top-down or bottom-up? Top-down tools require you to know all the top-level entry points for your infrastructure, which is usually part of the problem: we don’t. Bottom-up tools collect more data and will require more analytical skill to interpret the results. The right answer depends on your ultimate goal.
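To make the bottom-up approach concrete, here is a minimal sketch of the kind of analysis these tools automate: turning observed network connections into a dependency map and walking it transitively. The host names, ports, and connection records below are invented for illustration; real tools collect far richer data.

```python
from collections import defaultdict

# Hypothetical connection records, as a bottom-up collector might gather
# them (e.g., from netstat output on each host). All names are invented.
connections = [
    ("web01", "app01", 8080),
    ("web02", "app01", 8080),
    ("app01", "db01", 5432),
    ("app01", "cache01", 6379),
]

def build_dependency_map(conns):
    """Group observed connections into a source -> {(target, port)} map."""
    deps = defaultdict(set)
    for src, dst, port in conns:
        deps[src].add((dst, port))
    return deps

def downstream(deps, host, seen=None):
    """Everything 'host' transitively depends on -- useful for DR groups."""
    if seen is None:
        seen = set()
    for dst, _port in deps.get(host, ()):
        if dst not in seen:
            seen.add(dst)
            downstream(deps, dst, seen)
    return seen

deps = build_dependency_map(connections)
print(sorted(downstream(deps, "web01")))  # app01 plus everything it calls
```

Even this toy version shows why bottom-up results need interpretation: the raw edges answer “who talks to whom,” but deciding which of those conversations constitute a business application is still analysis work.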
Inclusiveness: Can the tool discover and monitor all the types of infrastructure you use, e.g., operating systems, hardware, servers, firewalls, proxies, and storage? While you want to cover everything you have, be careful not to pay for more than you truly need based on your ultimate goal.
Method of Consumption: How can you consume the data? Does the tool offer a user-customizable portal or only printed reports? Are the results static or dynamic?
Manipulation of Results: How can you manipulate the results? Do you have to download them or ETL them into another system, or can you filter and manipulate the views to answer your questions? Can you access the low-level detail or only high-level views?
Integration: Can you export or integrate the results with your CMDB, change management, DR management, etc.? How easy is that process: one click, or multiple export/import/transform steps? This can be a force multiplier for you and save costs far beyond the price of the tool.
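As an illustration of what a multi-step export/transform can look like, here is a minimal sketch that flattens hypothetical discovery records into a CSV shaped for a CMDB relationship import. The column names and the “Depends on” relationship type are assumptions made up for the example, not any vendor’s actual schema.

```python
import csv
import io

# Hypothetical rows from a discovery tool's export; field names are
# invented for illustration.
discovered = [
    {"source": "web01", "target": "app01", "port": 8080, "protocol": "tcp"},
    {"source": "app01", "target": "db01", "port": 5432, "protocol": "tcp"},
]

def to_cmdb_csv(rows):
    """Flatten discovery records into a CMDB relationship-import CSV."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["parent_ci", "child_ci", "relationship", "notes"]
    )
    writer.writeheader()
    for row in rows:
        writer.writerow({
            "parent_ci": row["source"],
            "child_ci": row["target"],
            "relationship": "Depends on",  # assumed relationship type
            "notes": f'{row["protocol"]}/{row["port"]}',
        })
    return buf.getvalue()

print(to_cmdb_csv(discovered))
```

If your tool of choice only offers this kind of manual pipeline, count the ongoing effort of keeping it running against the value of a one-click integration.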
Ultimate Goal: “If you don’t know where you’re going, all roads will take you there.” The single most important factor is to clearly define what you need to achieve from the application dependency exercise you are considering. What is your scope: a subnet, a data center, the entire enterprise? What specific set of use cases are you solving for? Just because you can do a thing does not mean you should.
The tool of choice for our firm for application discovery is Risc Networks CloudScape 2.0. It is a bottom-up, agentless solution and can produce meaningful results using limited-access credentials. The end-user portal is easy to use and offers powerful visual tools. I encourage you to include it in your list of potential candidates.