Imperva DAM Deployment Best Practices

Database Activity Monitoring (DAM) Deployment - Customers and partners need support in their DAM deployments. Above is an entire webinar dedicated to best practices for DAM deployments.

In this resource bundle the experts answer several questions about DAM deployments. We looked into the Imperva Community discussion questions and Imperva Community blogs around DAM and deployment, and gathered the most frequently asked questions in one place. The questions below came from common support cases, questions asked in past community webinars, and more.

  • 6 Steps to Deploying Imperva DAM  In this blog, Imperva's DAM specialist Craig Burlingame walks through 6 steps for deploying Imperva's DAM product: configure the site tree, run discovery and classification, define audit policies, define security policies, and define reports and review. It's a quick and easy read.

  • Can DAM be integrated with Cisco ISE for authentication? Not that I have seen in the last 10 years; I have not seen an integration with Cisco ISE. My answer at this point is going to be no, but it's something we will look at to see if someone else has found a way to do it outside of Imperva support.

  • Does archiving always cause the gateway to run out of disk space? No, archiving does not cause it, but if you are not archiving, there is the potential that in a high-use environment you would eventually run out of disk space on the gateways. What you want is to set up a system event that says, if the gateway gets to 80% disk usage, send me an alert.
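
The 80% threshold is configured as a system event in the SecureSphere MX; purely as an illustration of the same logic, here is a minimal Python sketch. The partition path and threshold are assumptions for the example, not product defaults.

```python
import shutil

# Hypothetical mount point where the gateway keeps audit data (illustrative only).
AUDIT_PARTITION = "/var/audit"
ALERT_THRESHOLD = 0.80  # alert when 80% of the partition is used


def check_disk(path: str = AUDIT_PARTITION, threshold: float = ALERT_THRESHOLD) -> None:
    usage = shutil.disk_usage(path)
    used_ratio = usage.used / usage.total
    if used_ratio >= threshold:
        # In SecureSphere this would be a system event with a followed action (email, SNMP trap, etc.).
        print(f"ALERT: {path} is {used_ratio:.0%} full - review archiving and purge settings")
    else:
        print(f"OK: {path} is {used_ratio:.0%} full")


if __name__ == "__main__":
    check_disk()
```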

  • The DAM agent monitors database network traffic, so it can see passwords, credit card numbers, or other sensitive details, and a DAM support engineer could view them. Is there any way to avoid this? Yes, there is the ability to mask sensitive data. In the tool, when you set up the server group, you can add data sets or data-masking dictionaries that say, for example, if a credit card number is spotted, replace the digits with stars (all of them, or all but the last four), and if fields like username or password appear, replace those with stars as well. The data can be masked in the alerting information too, so the SOC engineers don't see that sensitive or critical data. (In-depth video of Data Risk Analytics)
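
In the product, masking is driven by the data-masking dictionaries attached to the server group. As a rough illustration of the effect only, here is a minimal Python sketch; the regular expressions and the "keep the last four digits" choice are assumptions, not the product's actual dictionaries.

```python
import re

# Illustrative patterns only - the real product uses configurable data-masking dictionaries.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")          # 13-16 digit card-like numbers
PASSWORD_PATTERN = re.compile(r"(password\s*=\s*)\S+", re.IGNORECASE)


def mask_card(match: re.Match) -> str:
    digits = re.sub(r"\D", "", match.group(0))
    # Keep the last four digits, star out the rest (one common masking choice).
    return "*" * (len(digits) - 4) + digits[-4:]


def mask_query(query: str) -> str:
    masked = CARD_PATTERN.sub(mask_card, query)
    masked = PASSWORD_PATTERN.sub(r"\1********", masked)
    return masked


print(mask_query("UPDATE cards SET pan = '4111 1111 1111 1111' WHERE user = 'bob' AND password = s3cret"))
# -> UPDATE cards SET pan = '************1111' WHERE user = 'bob' AND password = ********
```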

  • What are the core policies I need to implement to optimize my DAM configuration, and what is the best practice for maintaining these policies? At a minimum, you will want to create audit policies around privileged user activity, and audit all interactions with the specific, sensitive databases that contain your critical data. From a security viewpoint, it is generally good practice to start with policies such as failed logins indicative of a brute-force attack (multiple failures in a short amount of time), and perhaps queries that return a large number of records. From there, define what is critical for you to know about if it happens.
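
To make the "multiple failed logins in a short amount of time" idea concrete, here is a minimal Python sketch of a sliding-window check. The threshold, window length, and event source are assumptions; in SecureSphere this is expressed as a security policy with followed actions rather than code.

```python
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # assumed observation window
THRESHOLD = 5                   # assumed failure count that should raise an alert


class FailedLoginWatcher:
    """Tracks failed logins per source (user, session, or IP) over a sliding window."""

    def __init__(self) -> None:
        self._failures: dict[str, deque[datetime]] = {}

    def record_failure(self, source: str, when: datetime) -> bool:
        """Return True when this failure pushes the source over the threshold."""
        window = self._failures.setdefault(source, deque())
        window.append(when)
        # Drop failures that have fallen out of the observation window.
        while window and when - window[0] > WINDOW:
            window.popleft()
        return len(window) >= THRESHOLD


watcher = FailedLoginWatcher()
now = datetime.now()
for i in range(6):
    if watcher.record_failure("app_user@10.0.0.5", now + timedelta(seconds=10 * i)):
        print(f"Possible brute-force attack from app_user@10.0.0.5 after {i + 1} failures")
```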

  • What are the key steps in the DAM maturity model, and how do I effectively achieve optimum maturity with the product? Maturity is achieved by continuously monitoring and tuning the system so that it delivers the best results for your specific needs, whether those are audit, security, or a combination of the two. Reviewing the user profiles, alerts, and reports allows you to tune the system so that it provides the best ROI.

  • How best can small security teams scale their efforts when managing DAM deployments with limited resources? Can we automate agent deployments to help scale our efforts efficiently? Having a small security team means you will have to be a jack of all trades. Juggling the addition of new database sources, creating and maintaining policies (security and audit), checking user profiles, handling the discovery and classification of data, and providing the reports requested by audit, compliance, and management will be almost overwhelming. Learning to use the schedules in SecureSphere to automate as many tasks as possible will help, and building good relationships with the IT, DBA, networking, and SOC teams will help you stay on top of changes before they get out of hand.

  • What are some best practices around security policies? This depends. Look at security policies like the TSA at the airports: stealing their motto of "If you see something, say something," you will want to create policies that notify you of specific events. A few that come to mind: a large number of failed logins coming from a single user, session, or IP address in a short amount of time, which could be indicative of a brute-force attack; and queries that return a large number of records. For some, that could be 100 records; for others, it could be 500,000. It all depends on what is normal in your environment. Other policies I have seen include watching for critical databases or tables being dropped, privilege escalations, and access using application IDs from outside of the servers that host the applications (a sketch of that last check follows below). Read this post (Looking for some tips and tricks with tuning DAM security policies within my environment.)
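
The "application ID used from outside the application servers" policy mentioned above boils down to an allow-list comparison. A minimal sketch, where the account names and IP addresses are made-up examples and the check would normally live in a SecureSphere security policy:

```python
# Hypothetical mapping of application service accounts to the hosts allowed to use them.
ALLOWED_SOURCES = {
    "crm_app": {"10.1.10.21", "10.1.10.22"},
    "billing_app": {"10.1.20.31"},
}


def check_login(db_user: str, source_ip: str) -> None:
    allowed = ALLOWED_SOURCES.get(db_user)
    if allowed is None:
        return  # not an application ID we restrict
    if source_ip not in allowed:
        # In SecureSphere this would surface as a security policy violation / alert.
        print(f"ALERT: application ID '{db_user}' used from unexpected host {source_ip}")


check_login("crm_app", "10.1.10.21")    # expected app server - no alert
check_login("crm_app", "192.168.5.77")  # workstation using the app ID - alert
```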

  • When do you use a Large Server Cluster, and what indicators determine the requirement for a Large Server Cluster instead of a standard cluster? Large Server Clusters can and should be used when you have an extremely large database server that would overwhelm a single gateway with the amount of traffic it generates. By implementing a Large Server Cluster, you spread the agent traffic across multiple gateways, allowing all of the traffic to be consumed without worrying about lost audit data or traffic spikes. Recommended reading: Introducing the DAM Cluster: Large Agent Deployments Made Easy

  • How do you automate DAM agent installs? Agent installs can be automated in several ways. First, there are silent-install script options, which are documented in the user guides. Second, there are examples of using tools like Ansible to automate the deployment of agents in the Imperva GitHub repositories. Recommended watching: Which Imperva Agent do I need to Install?
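
Purely to show the shape of looping a scripted install over a host list, here is a minimal Python sketch. The package path, install flags, and host names are placeholders, not the documented silent-install options; consult the agent user guide for the real switches, or use the Ansible examples in the Imperva GitHub repositories for a more robust approach.

```python
import subprocess

# Placeholder values - substitute the real package name and the silent-install
# options documented in the SecureSphere agent user guide.
AGENT_PACKAGE = "/tmp/imperva-agent-installer.bsx"
INSTALL_COMMAND = f"sudo sh {AGENT_PACKAGE} -- --silent --config /tmp/agent-install.cfg"

DB_HOSTS = ["db-prod-01", "db-prod-02", "db-prod-03"]  # hypothetical inventory


def install_agent(host: str) -> bool:
    """Copy the installer to the host and run it non-interactively over SSH."""
    copy = subprocess.run(["scp", AGENT_PACKAGE, f"{host}:{AGENT_PACKAGE}"])
    if copy.returncode != 0:
        return False
    run = subprocess.run(["ssh", host, INSTALL_COMMAND])
    return run.returncode == 0


for host in DB_HOSTS:
    status = "ok" if install_agent(host) else "FAILED"
    print(f"{host}: {status}")
```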

  • In the case of multi-tenant database servers, how do you filter the traffic to be monitored for specific databases only? You can use the match criteria available in both the audit and security policies to narrow the application of those policies down to very specific assets.

  • Is it recommended to classify data before using Imperva DAM? It is certainly a major component of implementing DAM. Without knowing where the sensitive data resides and what type of data it is, you will have to be more general about which assets you create policies for. By narrowing down what the critical data is and where it lives, you can be far more precise about what your policies are applied to.

  • How do I identify whether an MX/SOM has reached the maximum number of policies that can be created? I don't know of a documented maximum, but in technology nothing is infinite; you obviously can't have a billion. Let me do a little investigation and get back to you. I don't know of a known number off the top of my head, and I haven't run into one in the field, but I can certainly work with our product group to give you a general best-practice figure for what a maximum would be. One thing I would ask is whether there is an opportunity to consolidate policies. I'm not sure what use cases are driving these policies, but I would certainly love to work with you to see what we're trying to accomplish and why we're creating them, and maybe help assess and optimize some of that. If you have a ton of policies but low traffic, those things tend to balance each other out.

  • I am new to Imperva. Is there an add-on that allows me to see the before and after values of fields in database tables using Imperva? We don't really have a "show me what was there before I modified this" capability, because we see the query as it happens rather than the previous state. What we can do is output all of the modification records we see, in sequence, to a SIEM or somewhere else. If you know today that the new value is X, and tomorrow you see another new value, then outputting those events in sequence gives you a way to do a historical reference of what the value was and every value it changed to over the course of time. So we don't have a state machine keeping track of before and after, but you can certainly simulate that by outputting those events to some other storage like a SIEM or Elastic and reviewing them in sequence (a minimal sketch of that idea follows).
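
A minimal sketch of the "output the changes in sequence and reconstruct history" idea. The event format and the sample change feed are invented for illustration; in practice the events would come from the audit data forwarded to a SIEM or Elastic.

```python
from datetime import datetime

# Hypothetical change events as they might be exported, in order, to external storage.
change_events = [
    {"timestamp": datetime(2024, 1, 1, 9, 0), "table": "customers", "column": "email", "new_value": "a@old.com"},
    {"timestamp": datetime(2024, 3, 5, 14, 30), "table": "customers", "column": "email", "new_value": "a@new.com"},
    {"timestamp": datetime(2024, 6, 2, 11, 15), "table": "customers", "column": "email", "new_value": "a@newer.com"},
]


def history_with_before_after(events):
    """Pair each change with the previous value so 'before' and 'after' can be reviewed."""
    previous = None
    for event in sorted(events, key=lambda e: e["timestamp"]):
        yield {
            "timestamp": event["timestamp"],
            "table": event["table"],
            "column": event["column"],
            "before": previous,          # unknown for the very first observed change
            "after": event["new_value"],
        }
        previous = event["new_value"]


for row in history_with_before_after(change_events):
    print(f"{row['timestamp']}: {row['table']}.{row['column']} {row['before']!r} -> {row['after']!r}")
```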

Still can't find what you're looking for? Log in to the Imperva Community

CLICK 👉 HERE 👈 TO ASK YOUR QUESTION.