Hello Imperva Community – We are looking forward to our Ask Me Anything (AMA) session on Friday, September 18th, 2020, from 10:00 – 11:00 AM CT, featuring @Brian Anderson, Director of Technology (Office of the CTO), as well as @Paul Aiuto, Michael Watson, Ofer Breda, Aaron Connell, and @Jim Burtoft (prm) from our Sales Engineering team! Our experts will be answering any and all of your questions about our Database Activity Monitoring product. Event Instructions:
We can certainly support creating signatures, and policies that use signatures are typically crafted to solve something very specific. First, it's important to ask why we are creating signature policies, and also to consider all of the out-of-the-box capabilities. This starts with identifying what is sensitive by leveraging data classification, then monitoring only what is in scope, as we don't want to write everything to disk if it is not needed. Additionally, we can monitor all privileged operations, which can be applied across the board to both DBAs and service accounts.
If there is concern about anything additional, there is also a vast list of operations we can monitor in the command groups predicate/global object. If we want to ensure that we capture all important future events, including anything that might not be present in this list, we can simply create a policy that looks for the inverse (if the command or operation is not equal to any in this list), which will capture any potential outlier.
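The inverse-match idea above can be sketched in a few lines of Python. This is an illustration of the concept only, not Imperva policy syntax; the operation names and list are hypothetical.

```python
# Hypothetical sketch of the "inverse match" policy idea: rather than
# enumerating every operation to capture, flag anything that is NOT in
# the known command group, so unexpected or future operations are caught.
KNOWN_OPERATIONS = {"SELECT", "INSERT", "UPDATE", "DELETE"}  # illustrative list

def is_outlier(operation: str) -> bool:
    """Return True when an operation falls outside the known command group."""
    return operation.upper() not in KNOWN_OPERATIONS

events = ["select", "GRANT", "update", "ALTER SYSTEM"]
outliers = [op for op in events if is_outlier(op)]
print(outliers)  # ['GRANT', 'ALTER SYSTEM']
```

Because the policy matches on "not in list," nothing needs to change when a new, unanticipated operation appears; it is captured by default.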
Aside from those policies and scenarios, if we need to create a signature, there are typically two parts to it. There is a quick pattern match where we specify (part=""), and then a regular expression that applies a more resource-intensive pattern match (rgxp=""). The idea is that the part= match executes first to determine whether the more expensive regex needs to run at all; this helps optimize performance and limits the number of times the signature executes to only when it needs to. For additional syntax examples, we can refer to some of the out-of-the-box generic dictionary groups under Setup->Global Objects.
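The two-stage evaluation described above can be illustrated with a small Python sketch. The function name and arguments are hypothetical; only the part=/rgxp= staging mirrors the behavior described in the answer.

```python
import re

# Illustrative two-stage signature match: a cheap literal substring check
# (the part= idea) gates the more resource-intensive regular expression
# (the rgxp= idea), so the regex only runs when it might actually match.
def signature_matches(text: str, part: str, rgxp: str) -> bool:
    # Stage 1: quick literal pattern match; bail out early when absent.
    if part not in text:
        return False
    # Stage 2: the expensive regex runs only when stage 1 passed.
    return re.search(rgxp, text) is not None

query = "SELECT * FROM credit_cards WHERE id = 1"
print(signature_matches(query, "credit_cards",
                        r"SELECT\s+\*\s+FROM\s+credit_cards"))  # True
```

On traffic that never contains the literal fragment, the regex engine is never invoked, which is exactly the optimization the part= prefilter provides.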
@Robert Miller, typically a certain number of databases are in scope, either because of regulatory compliance requirements or because of a security initiative to protect business-critical databases. If regulatory compliance is a requirement, and the organization carries the burden of quarterly or annual audits, DAM is very helpful in this process: it provides a single, central, holistic location for all logs across all database service types, and a standardized approach to producing audit reports that makes the process easy. This helps offload the repetitive and menial report-generation tasks that DBAs are typically saddled with, enabling them to focus on more strategic and revenue-generating activities. DAM can also help to exonerate DBAs, as all activities on the databases are already captured. Once DBAs understand this about DAM, it is a great way to show value to the business and to increase adoption on a per-business-unit or per-application basis.

Another component here that's really helpful is data classification. Just as we don't want to watch every channel on TV, we don't want to capture every audit event for every database; we want to refine that down to only what is in scope. We don't want to bloat our infrastructure by writing more to disk than we need to, and classification is a great way to refine that. We can leverage the native data classification in DAM, ingest data classification from an external third-party solution via APIs, or use both. Once you have the list of tables that you know are in scope, where your risk is, and what is sensitive, that really helps drive your policy definition. Then you can go back to the business and say: I know what your sensitive assets are, I know who is doing what with them, and I can produce reports for that.
Ultimately, this offloads the remedial, repetitive work of generating reports, helping to streamline and centralize it. That's one of the ways we can show value to the business.
------------------------------
Brian Anderson
------------------------------
@Jana Lee, this is a great question; customers typically ask if they can install third-party software, such as performance monitoring agents, on our appliances for that very purpose. We do support SNMP polling, but you're really looking for more granular traffic and utilization statistics. We have developed a package on GitHub that includes performance monitoring scripts for both the MX and the gateway appliances, and it can be used for either WAF and/or DAM appliances. Output includes everything from disk consumption and CPU utilization (top, sar, /proc/hades/cpuload) to memory and /proc/hades/status counter statistics, which can be offloaded to New Relic, to InfluxDB/Grafana, or as a generic uniquely indexed JSON object to a SIEM via syslog.
We typically have that run as a cron job executing every minute, so that you can see what all of those metrics are on an ongoing basis. That is, I would say, the best way to get visibility into your environment from a performance perspective.
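A minimal sketch of the pattern described above (this is not the actual GitHub package; the field names and /proc paths here are generic Linux assumptions, not the Imperva-specific /proc/hades counters): collect a few metrics and emit one uniquely indexed JSON object per run, suitable for scheduling from cron and forwarding to a SIEM via syslog.

```python
import json
import time
import uuid

def read_first_line(path: str) -> str:
    """Best-effort read of a /proc metric; degrade gracefully if absent."""
    try:
        with open(path) as fh:
            return fh.readline().strip()
    except OSError:
        return "unavailable"

def collect_metrics() -> dict:
    # Field names and sources are illustrative assumptions.
    return {
        "event_id": str(uuid.uuid4()),        # unique index per record
        "timestamp": int(time.time()),
        "cpu_load": read_first_line("/proc/loadavg"),
        "mem_info": read_first_line("/proc/meminfo"),
    }

record = json.dumps(collect_metrics())
print(record)  # one JSON line per invocation when scheduled from cron
```

Each record carries its own unique `event_id`, so downstream SIEM indexing can deduplicate or correlate individual polling intervals.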
What is the best way to check the health (current load, free space, backup options, alarm notifications, etc.) of the DAM MX/GW/SOM/DRA/EX appliance servers?
Hi @CJ Kuo, it is technically possible to offload the remote database traffic (over the network) to a sniffing gateway appliance, where the agent is only responsible for capturing the local DB traffic. But the industry has run into issues inspecting out-of-line traffic in general as perfect forward secrecy ciphers (ECDHE, Diffie-Hellman, etc.) become more common. This traffic cannot be decrypted out of line in a sniffing deployment, whereas the agent can decrypt it because it is local to the machine and has access to the required certificates. Aside from traffic decryption, customers have elected to deploy agent-only with gateway clustering for ease of management and simplicity. For all DAM deployments, gateway clustering is almost always recommended, especially if the amount of traffic exceeds the capacity of a single gateway. If you have to run multiple gateways, you reduce the cost of redundancy with the N+1 model that gateway clustering introduces. Also, gateway clustering maintains awareness of sessions across all gateways in the cluster, so if you lose a gateway, visibility into the existing user sessions to the database is maintained.
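The N+1 cost saving mentioned above can be made concrete with a quick worked example (the numbers are illustrative, and the helper function is hypothetical): standalone high-availability pairs require a standby per gateway (2N appliances), while a cluster needs only one shared spare (N+1).

```python
# Worked arithmetic for the N+1 redundancy model versus standalone HA pairs.
def appliances_needed(n_gateways: int, clustered: bool) -> int:
    # Standalone HA doubles the count; a cluster adds a single shared spare.
    return n_gateways + 1 if clustered else n_gateways * 2

for n in (2, 4, 8):
    print(n, appliances_needed(n, clustered=False),
          appliances_needed(n, clustered=True))
# e.g. 4 gateways: 8 appliances as HA pairs vs 5 in an N+1 cluster
```

The gap widens as the gateway count grows, which is why clustering is attractive whenever more than one gateway is required.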
Hello, what is the timeline for no longer needing to register the Imperva public key in order to deploy an agent? We heard that there will be a change from kernel-space to user-space deployment. Is there a timeline for this?
General question regarding the User space Agent: