Imperva Cyber Community


Database Activity Monitoring (DAM) Ask Me Anything Session

  • 1.  Database Activity Monitoring (DAM) Ask Me Anything Session

    Community Manager
    Posted 14 days ago

    Hello Imperva Community –

    We are looking forward to our Ask Me Anything (AMA) session on Friday, September 18th, 2020 from 10:00 – 11:00 AM CT featuring @Brian Anderson, Director of Technology (Office of the CTO), as well as @Paul Aiuto, Michael Watson, Ofer Breda, Aaron Connell, and @Jim Burtoft from our Sales Engineering team!

    Our experts will be answering any and all of your questions around our Database Activity Monitoring product.

    Event Instructions:

    1. If you are able to attend the event live, RSVP here and join our webinar session!
    2. Reply directly to this thread with your questions and an expert will reply to all questions received starting at 10:00 AM CT this Friday.
    3. Use @mentions when responding to a specific expert.

    Please reach out to me, your community manager, with questions or for help at communitymanager@imperva.com.

    If you are unable to make it during the time of the event, post your question to this thread and we will be sure it receives an expert response this Friday! Make sure to check back here following the session to see all of the amazing questions asked by your peers and the responses from our experts.
    #DatabaseActivityMonitoring
    #AllImperva

    ------------------------------
    Christopher Detzel
    Community Manager
    Imperva
    ------------------------------


  • 2.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Impervian
    Posted 13 days ago
    Any tips for creating signatures? I configure "do not audit" signatures all the time, and I always worry that I might not be auditing important future events.

    ------------------------------
    Sabajete Elezaj
    SNT Albania
    ------------------------------



  • 3.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @Sabajete Elezaj

    We typically say don't go create arbitrary signatures, because there are a lot of out-of-the-box configurations already at your disposal. If you have already implemented all of the other components we provide, then writing something custom like that is fine. By that, I mean we're looking for SELECT statements against sensitive data, and we're looking for privileged operations that are already present in command groups and privileged operation groups.

    If you are already monitoring those things for those specific users and you're interested in anything outside of that, you can create a policy that says: if my privileged operation doesn't equal one of these, then audit it. That captures any future commands that deviate from what's already known. That would be one suggestion. If there's something specific that you're looking for, there are two parts to a signature: a match and a pattern.

    The first part is a quick-reject check from a performance perspective: do I see this pattern first? Only then do I apply the more complex signature. We don't want to create a signature that could be expensive and resource-intensive to evaluate against all traffic. When you create signatures like that, ask whether there is something about this particular command that you know will be present, which you can do a quick check for before applying the more extensive, expensive signature. Then apply the signature only to the parts of the database, or the specific users, that the activity would originate from, so that we're not applying it to all traffic that comes into the database.

    ------------------------------
    Brian Anderson
    ------------------------------


  • 4.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago


    We can certainly support creating signatures, and policies that use signatures are typically crafted to solve something very specific. First, it's important to ask why we are creating signature policies, and to consider all of the out-of-the-box capabilities. This starts with identifying what is sensitive by leveraging data classification, and monitoring only what is in scope, as we don't want to write everything to disk if it is not needed. Additionally, we can monitor all privileged operations, which can be applied across the board to both DBAs and service accounts.

    If there is concern about anything additional, there is also a vast list of operations we can monitor via the command groups predicate/global object. If we want to ensure that we capture all important future events, including anything that might not be present in this list, we can simply create a policy that looks for the inverse: if the command or operation does not equal any in this list, audit it. That will capture any potential outlier.
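    The inverse-list idea can be sketched abstractly; the event shape and group contents below are invented for illustration and are not actual Imperva policy syntax:

    ```python
    # Sketch of the "inverse list" policy idea: audit any event whose
    # operation is NOT already covered by the known operation groups.
    # Event structure and group contents are illustrative only.

    KNOWN_OPERATIONS = {"SELECT", "INSERT", "UPDATE", "DELETE"}

    def should_audit(event):
        """True when the operation falls outside the known list (an outlier)."""
        return event["operation"].upper() not in KNOWN_OPERATIONS

    events = [
        {"user": "app_svc", "operation": "SELECT"},
        {"user": "dba01", "operation": "GRANT"},       # privilege change: outlier
        {"user": "dba01", "operation": "DROP TABLE"},  # schema change: outlier
    ]

    outliers = [e["operation"] for e in events if should_audit(e)]
    print(outliers)  # → ['GRANT', 'DROP TABLE']
    ```

    Anything unanticipated lands in the outlier bucket by construction, which is what makes the inverse policy robust to future commands.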

    Aside from those policies and scenarios, if we do need to create a signature, there are typically two parts to it. There is a quick pattern match where we specify (part=""), then there is a regular expression that applies a more resource-intensive pattern match (rgxp=""). The idea is that the part= executes first to determine whether the more expensive regex needs to execute at all; this helps to optimize performance and limits the number of times the signature executes to only when it needs to. We can refer to some of the out-of-the-box Setup->Global Objects->Generic dictionary groups for additional examples of the syntax as well.
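    The two-stage evaluation (a cheap part= literal check gating the expensive rgxp= regex) can be illustrated in Python; the patterns here are invented examples, not actual signature syntax:

    ```python
    import re

    # Two-stage signature match: a cheap substring check (analogous to part="")
    # runs first, and only if it hits do we pay for the regular expression
    # (analogous to rgxp=""). Patterns are illustrative only.

    PART = "xp_cmdshell"  # quick literal pre-check
    RGXP = re.compile(r"exec\s+\S*xp_cmdshell", re.IGNORECASE)  # costlier pattern

    def signature_matches(query: str) -> bool:
        lowered = query.lower()
        if PART not in lowered:          # stage 1: fast reject for most traffic
            return False
        return RGXP.search(lowered) is not None  # stage 2: full regex match

    print(signature_matches("EXEC master..xp_cmdshell 'dir'"))  # → True
    print(signature_matches("SELECT * FROM orders"))            # → False (stage 1)
    ```

    Most traffic never reaches stage 2, which is the performance point Brian makes above.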



    ------------------------------
    Brian Anderson
    ------------------------------



  • 5.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Impervian
    Posted 13 days ago
    One problem that I've run into is that DAM is deployed without really understanding what is necessary to effectively use the product and how to handle the information that gets generated. The appliances are easy to configure and deploy, but we are faced with trying to get involvement from other groups to make effective use of DAM. Any suggestions?

    ------------------------------
    Robert Miller
    Bank of the West
    Omaha NE
    ------------------------------



  • 6.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 10 days ago

    @Robert Miller

    Typically, a certain number of databases are in scope either because of regulatory compliance requirements or because of a security initiative to protect business-critical databases. If regulatory compliance is a requirement, and the organization carries the burden of going through quarterly or annual audits, DAM is very helpful: it provides a single, central location for all logs across all database service types and a standardized approach to producing audit reports, which makes the process easy. This offloads the repetitive, menial tasks DBAs are typically saddled with around generating those reports, enabling them to focus on more strategic and revenue-generating activities. DAM can also help to exonerate DBAs, as all activities on the databases are already captured. Once DBAs understand this about DAM, it is a great way to show value to the business and to increase adoption on a per-business-unit or per-application basis.

    Another component that's really helpful is data classification. We don't want to watch every channel on TV; likewise, we don't want to capture every audit event for every database. What we want to do is refine that down to only what is in scope. We don't want to bloat our infrastructure by writing more to disk than we need to, and classification is a great way to refine that. We can leverage the native data classification in DAM, ingest data classification from an external third-party solution via APIs, or leverage both.

    Once you have that list of tables that you know are in scope, where your risk is, and what is sensitive, it really helps to drive your policy definition. Then you can go back to the business and say: I know what your sensitive assets are, I know who's doing what with them, and I can produce reports for that. Ultimately you are offloading the repetitive work of generating reports, streamlining it, and centralizing it. That's one of the ways we can show value to the business.

    ------------------------------
    Brian Anderson
    ------------------------------



  • 7.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 10 days ago
    @Robert Miller

    This is a question we get all the time with DAM, because there is a lot of functionality in DAM that not everybody is using. I want to encourage everybody to dig in: look at what you're using now, but check out those other tabs, because there's a lot there. The sensitive data scanning that Brian mentioned is useful for you from a policy perspective, but it can also be useful for the compliance team for their CCPA compliance. The vulnerability scans, where we can check whether certain patches or certain configurations are applied on the database, can be great for the standards team responsible for making sure databases are deployed with a certain configuration. And you have the ability to go to these other departments and say, "Hey, look, we already have a tool that does this. You can use it." ​

    ------------------------------
    Jim Burtoft
    Imperva
    PA
    ------------------------------



  • 8.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 10 days ago
    We can help ingest those change control IDs and enrich your data set. For anything that is run against that database, we can validate that (a) it carries a valid ticket, and (b) enrich all the queries with that ticket information. It becomes a very simple, easy report. And lastly, if someone tries to perform a covered operation against the database without a valid ticket, we can generate an alert saying, "Hey, we saw somebody try to do something that wasn't associated with a ticket."

    Or we can even go back into that change control system, whether it's ServiceNow, Remedy, or something else, and update those change control records with the actual queries programmatically. So if a DBA logs in and runs a bunch of queries, those will ultimately end up back in the change control records, and you don't have to go do this back and forth. That's a really valuable operational integration. It's very straightforward to set up, but for the business it's a huge process optimization, and it saves a lot of time for everyone involved.
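    The validate-and-enrich flow can be sketched as follows; the ticket IDs, session fields, and in-memory lookup are all hypothetical stand-ins for a real change-control integration:

    ```python
    # Sketch of change-control correlation: check each captured DBA session
    # for a valid open ticket and flag sessions without one. The ticket IDs,
    # session fields, and in-memory lookup are hypothetical; a real
    # integration would query ServiceNow/Remedy instead of a local set.

    OPEN_TICKETS = {"CHG0012345", "CHG0012399"}

    def enrich(session):
        """Annotate a session with whether its ticket is open and valid."""
        session["ticket_valid"] = session.get("ticket_id") in OPEN_TICKETS
        return session

    sessions = [
        {"user": "dba01", "query": "ALTER TABLE t ADD c INT", "ticket_id": "CHG0012345"},
        {"user": "dba02", "query": "DROP INDEX ix_old", "ticket_id": None},
    ]

    alerts = [s["user"] for s in sessions if not enrich(s)["ticket_valid"]]
    print(alerts)  # → ['dba02']
    ```

    The flagged sessions are exactly the ones that would drive the "activity without a ticket" alert described above.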

    To set up those configurations, go to github.com/imperva. There are lots of open-source projects there that help with integrations, whether that's integrating with Ansible, Chef, or Puppet to deploy all of your agents in an automated fashion, writing integrations with ServiceNow, or running automated tasks to ingest data or push data somewhere. We have lots of different solutions posted there. I've authored several of them, and the team here has contributed to them as well. That would be where we'd point as far as documentation and resources for how to implement this. And of course we always have services that can help you put those things in place from an implementation perspective as well.

    ------------------------------
    Brian Anderson
    ------------------------------



  • 9.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @Robert Miller

    I want to piggyback on that, one level higher: operationalizing that information is very important as well. Think about sending it over to your SOC or your NOC. How can we refine that data so they can see what they need to see for a forensic investigation?

    You can leverage traditional security through database monitoring, but also leverage a solution that sits on top of that data, which is kind of a mix: refining the millions of events that are coming in into more narrative, contextual information that really allows the SOC to know what is going on in the environment and how to react accordingly.

    ------------------------------
    Paul Aiuto
    ------------------------------


  • 10.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Posted 13 days ago
    What is the best way to keep an eye on the gateways via the CLI, to make sure they are not missing audit data or becoming overloaded?

    ------------------------------
    Jana Lee
    Cybersecurity Engineer/DBA
    Regions Bank
    ------------------------------



  • 11.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 9 days ago

    @Jana Lee

    This is a great question. Customers typically ask whether they can install third-party software, like performance monitoring agents, on our appliances for that very purpose. We do support SNMP polling, but really you're looking for more granular traffic and utilization statistics. We have developed a package on GitHub that includes performance monitoring scripts for both the MX and the gateway appliances, and it can be used for either WAF and/or DAM appliances. Output includes everything from disk consumption, CPU utilization (top, sar, /proc/hades/cpuload), and memory, to /proc/hades/status counter statistics, and it can offload that to New Relic, InfluxDB/Grafana, or a generic uniquely indexed JSON object output sent to a SIEM via syslog.

    We typically have that run as a cron job executing every minute, so that you can see what all those metrics are on an ongoing basis. That is the best way, I would say, to get visibility into your environment from a performance perspective.


    https://github.com/imperva/mx-toolbox/tree/master/performance-monitoring
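    A minimal crontab entry for that every-minute cadence might look like the following; the script path and log location are assumptions to adapt to wherever you check out the mx-toolbox repository:

    ```shell
    # Illustrative crontab entry: run the performance-monitoring script every
    # minute and append its output to a log. Paths are placeholders.
    * * * * * /opt/mx-toolbox/performance-monitoring/monitor.sh >> /var/log/imperva-perfmon.log 2>&1
    ```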




    ------------------------------
    Brian Anderson
    ------------------------------



  • 12.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Posted 13 days ago
    In the future, all DAM hardware calculation methods will be changed from traffic to IPU.

    However, the interpretation of IPU in Imperva Hardware Appliances is "The Imperva Performance Units (IPU) is a proprietary metric that represents the maximum recommended load on a gateway, and is a number used for sizing and load balancing Imperva appliances in a given deployment. IPU is computed based on a number of metrics including, but not limited to, the total number of database servers, the total number dB cores, and type of dB used in the deployment"

    How can I estimate, or tell the customer, that a gateway has been overloaded, and what value confirms it?

    ------------------------------
    CJ Kuo
    Ciphertech
    Taipei
    ------------------------------



  • 13.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @CJ Kuo

    I was taking a look at this, and basically these are questions about how we size an environment. There is a notion of transactions per second versus Imperva Performance Units, which is a general term we use to size database traffic. There isn't really an easy answer or silver bullet that applies universally to every environment in the same way. By that I mean no two databases are designed or utilized in the same way. The schema, the underlying data, and the audit requirements for each database server will be different. If we initially monitor a database and only look at a few privileged operations, versus monitoring that exact same database for excessive record access (like SELECT operations against all your sensitive data tables) and/or a variety of other modifications or privileged operations, the sizing requirements become a moving target based on a variety of factors.

    The best answer is: let's monitor the appliances and look at their utilization. If you have an increase or spike in traffic on those databases, at some point it makes sense to increase the resources it takes to monitor them. It's a moving target both on the customer side and on the database activity monitoring side, depending on what we are monitoring; as we apply more policies, more resources are required to monitor, store, and manage this data.

    As a general statement for sizing, historically what we've used when we knew nothing about the environment is 100 transactions (or 100 IPUs) per second per core. That is just an average of what we have typically seen work as a starting point. But, as mentioned above, this may change over time as traffic and utilization increase. The best thing to do is to implement performance monitoring and keep track of how your gateways are being utilized and what their capacity is; when you need to grow, that is your justification for doing so.
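    As a back-of-the-envelope illustration of that 100-IPUs-per-second-per-core starting point (the observed peak below is an invented figure):

    ```python
    import math

    # Rough core count from the 100 IPUs/sec/core rule of thumb quoted above.
    # The measured peak is an invented example number.
    IPU_PER_CORE = 100
    observed_peak_ipu = 1750  # e.g. peak transactions/sec across monitored DBs

    cores_needed = math.ceil(observed_peak_ipu / IPU_PER_CORE)
    print(cores_needed)  # → 18
    ```

    Re-running this against the monitored peak, rather than the day-one estimate, is what keeps the sizing honest as policies and traffic grow.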

    This is one of the reasons that Imperva has moved to a flex model of pricing and stopped selling perpetual licenses. The question becomes: how many database servers do we need to monitor? You can deploy however many gateways you need to do so, and because of this moving target, you can increase the number of gateways in a deployment as utilization increases or as the environment calls for it.

    ------------------------------
    Brian Anderson
    ------------------------------


  • 14.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Posted 13 days ago

    What is the best way to check the health (current load, free space, backup options, alarm notification, etc) of the DAM MX/GW/SOM/DRA/EX appliances servers?



    ------------------------------
    Mayuranathan Palanichamy
    IHIS
    Singapore
    ------------------------------



  • 15.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @Mayuranathan Palanichamy

    There are a couple of things here. Let me go back to the performance monitoring package referenced on GitHub; running that is one really good practice that will output all of this information and dump it into a time-series database or something designed for performance-monitoring visualization. JSON output, InfluxDB/Grafana, or New Relic are the targets we have seen requirements for in the past.

    Secondarily, you can also create system events, so that if a load threshold is exceeded, or there's a disconnection from the MX to the gateway or from the gateway to the agent, events are generated in your environment that you can then create tickets for, get alerts for, or send emails for. That's helpful purely from a visibility and notification standpoint.

    Performance monitoring is definitely something that gives visibility into the rest of those components. That applies to the MX and the gateway, and the MX script should work exactly the same for SOM. For DRA you don't really have the same challenge, because it's not looking at live traffic coming in and doesn't have the responsibility of potentially remediating that traffic. It's more of an offline analytics server, so you certainly can monitor metrics on it as well, but this applies more to the MX, gateway, and SOM.

    ------------------------------
    Brian Anderson
    ------------------------------


  • 16.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Impervian
    Posted 13 days ago
    Can you talk about things like reporting, saving reports to shared locations, and examples of reports that show value to management? For example: a report that shows how many blocked and non-blocked events occurred in an on-premises WAF or DBF/DAM environment, looks good, and/or includes information such as how many SQLi attacks were stopped, then gets sent out to a shared directory/site that managers can log in to and see the high-level view that matters most to them. That helps keep or get funding for these appliances and initiatives. Can you talk more about that? Maybe a specific webinar around this area would be helpful as well.

    ------------------------------
    Jason Park
    County of Los Angeles
    CA
    ------------------------------



  • 17.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @Jason Park

    First, when you generate a report, you can create an action set, and the action set can be responsible for copying that report somewhere, such as a shared drive location. It's very common to have one shared drive or NAS that these reports are dropped on for historical reference, and you can schedule these reports so that, say, Monday morning you get your previous week's reports, or whatever cadence you'd like.

    That is the underlying mechanism for outputting reports to a particular location, and the action set is what we would typically use to do it. The second part of this is: what kinds of reports are useful? Working with customers in the past, I've seen a handful of reports that are really useful.
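    As a sketch of what the copy step amounts to on the receiving side (all paths and the report name are placeholders for your environment):

    ```shell
    # Copy a generated report to a shared location, date-stamped so historical
    # copies accumulate. All paths and names are placeholders.
    REPORT=/tmp/weekly_privileged_ops.pdf
    SHARE=/tmp/reports/dam        # stand-in for a real shared drive or NAS mount
    touch "$REPORT"               # stand-in for the report the MX generated
    mkdir -p "$SHARE"
    cp "$REPORT" "$SHARE/weekly_privileged_ops_$(date +%F).pdf"
    ls "$SHARE"
    ```

    The date stamp is the part that matters: without it, each scheduled run would overwrite the previous report instead of preserving the history.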

    Reporting on privilege operations, looking at what your users are doing in the environment and seeing what that looks like, are you modifying something? What's happening? And that could be then given to each one of the business units or each one of the application owners to help them get visibility into what's happening in their environment.

    That is one aspect of it. Excessive failed login attempts against the database and SQL exceptions are also worth giving visibility to, as are your top queries and most expensive queries, which can be given back to the business unit as well. Those are things that have been helpful from a reporting perspective.

    As a general statement (and @Jim Burtoft and I were talking about this as well), our web application firewall product blocks a lot of stuff, because half the traffic on the internet is automated and a good chunk of that is malicious. So you're always going to be blocking a lot of noise on the web application firewall side. The database activity monitoring world doesn't operate the same way; you do not see a bunch of malicious traffic every day. But you do want to find nefarious things, find activity that deviates from the norm, and prevent or detect things that should not be happening. For example, dropping a table on a production box is something you'd want to block. Reporting on things that should not be happening is really how you show value in database activity monitoring: looking at sensitive assets and at what should or shouldn't be happening around privileges and access.

    If you have our Data Risk Analytics product, that can really help pinpoint that a user did something, with an actionable explanation of what they did that deviated from normal traffic. That's really helpful to deliver to management: we ingested a billion events and I have seven things to look at. It refines things down instead of dumping everything to the SOC or having to run reports constantly, giving you actionable, bite-size visibility into what you should be paying attention to. Apart from that, looking at specific user activity, excessive record access to sensitive data, and the operations performed around that sensitive data are generally useful reports that we've seen in customer environments.


    ------------------------------
    Brian Anderson
    ------------------------------



  • 18.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Impervian
    Posted 12 days ago
    @Christopher Detzel

    Thank you for this opportunity. My first question is: how long is enough to run your policies in monitoring mode? Secondly, is it really necessary to turn on block mode when DBAs log in to the database with the SYSDBA role? And to optimize the usage of our DAM, how do we plan the switch from monitoring to block mode? It's so sad that I will be missing this event. I will be back to read through the insightful questions from the Impervians on this thread.

    ------------------------------
    Adesola Jolaoso
    CBN
    Abuja
    ------------------------------



  • 19.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Impervian
    Posted 11 days ago
    Hi Adesola,

    I am an Imperva DAM customer and would like to help with some answers to your questions.
    1. You may run monitoring mode for about a month or two, depending on your database environment. However, there is no specific time frame given by Imperva for when to stop monitoring mode and enable block mode.
    2. Whether you turn on block mode or exclude DBA IDs from blocking depends purely on your team's/management's decision. If your decision is not to enable block mode for DBAs, then you may simply set DBA IDs to alert instead of block.
    3. To optimize usage, try to fine-tune DAM alerts as much as possible and whitelist accordingly to reduce the number of alerts. That will in turn reduce the load on the MX/gateways; then finally turn on block mode.

    Hope it helps!

    @Christopher Detzel
    DAM experts may correct my answers, if I am wrong. 
    Thanks..

    Regards,
    Rakesh


    ------------------------------
    rakesh ch
    ------------------------------



  • 20.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @Adesola Jolaoso

    Congratulations on your Impervian Spotlight! ​To answer this question: 

    Around the topic of monitoring mode versus blocking mode: there is a little more concern about putting things into blocking mode on the database side than on the web application firewall side. The web application firewall is intended to mitigate a bunch of traffic; you get a lot of noise there, and you expect to block it based on what you know about.

    On the database activity monitoring side, there's a fear of blocking a production transaction, so we want to be selective about what we block. A lot of customers are very selective; to do that successfully, things that we know we can safely block are what we would start with.

    For example: should you ever be dropping a table on a production database? That's something I can confidently say we should put a policy in place to block or prevent. The same goes for certain operations like escalating user privileges or creating or deleting users on a production box, unless it follows a change control process.

    Those sorts of things you can absolutely put into block mode confidently. As for moving other monitoring policies to block mode, it really depends on the policy definition and what you are looking to accomplish. I go back to: what is it that we know for a fact we can block and mitigate against, and what is our biggest risk? That would be modification to the schema, like dropping tables.

    Secondarily, there may be things that we can easily implement. If it's a PCI-type application and we have a credit card table, should there ever be more than one record containing a credit card leaving that table at any time? Applications don't typically do a SELECT * against the credit card table. We can look at the number of records leaving certain tables and have policies around that as well, to prevent excessive record access.
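    That record-count idea can be sketched as a per-window tally; the threshold, window contents, and event shape are all illustrative:

    ```python
    from collections import defaultdict

    # Sketch of an "excessive record access" check: flag any (user, table)
    # pair whose total rows returned within one monitoring window exceeds a
    # threshold. Threshold, window, and event shape are illustrative.

    MAX_RECORDS = 100

    def over_threshold(window_events):
        """window_events: (user, table, rows_returned) tuples for one window."""
        totals = defaultdict(int)
        for user, table, rows in window_events:
            totals[(user, table)] += rows
        return [pair for pair, total in totals.items() if total > MAX_RECORDS]

    window = [
        ("app_svc", "credit_cards", 1),   # normal: single-record lookups
        ("app_svc", "credit_cards", 1),
        ("dba01", "credit_cards", 5000),  # bulk pull: should be flagged
    ]
    violations = over_threshold(window)
    print(violations)  # → [('dba01', 'credit_cards')]
    ```

    The point is that the application's normal one-record-at-a-time pattern never trips the threshold, while a bulk pull does.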

    Those are things you could put in as a general statement: X number of records over a certain period of time, that sort of thing. The last part of this is the question about DBAs logging in to the database with the SYSDBA role. There are two types of database users. There are programmatic users that invoke access from code, such as applications or scripts. Then there are interactive users: DBAs that log in from Toad or SQL Developer or whatever client they have and do different types of things all the time. For the programmatic users, those behaviors and activities are repetitive and fall into a pattern. They shouldn't really deviate much, because it's code executing the same thing every single time depending on the input parameters.

    The DBAs, on the other hand, you would not be able to put into that box of saying, "Hey, I'm expecting the same things from you," because they do different activities all the time, every day. What we can do is look for deviations from what is known and expected from our service accounts. If an application user (a service account) logs in from the associated application server's IP address, from the correct client, and accesses known tables, great. This is where the profiling feature comes in really useful: I can develop a profile for my service accounts and say, this is what a positive security model looks like; this is what you should be doing.

    If you deviate from this, I want to block or prevent that deviation. That's another example of a way you can put something into blocking confidently, because you already know what those behaviors and activities should look like. Those are some general examples of how we've seen blocking be useful in production, and of course you can set policies for lots of other things that don't necessarily need to block.

    ------------------------------
    Brian Anderson
    ------------------------------



  • 21.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Impervian
    Posted 8 days ago
    @Brian Anderson,

    Thanks a lot. This is quite insightful.​

    ------------------------------
    Adesola Jolaoso
    CBN
    Abuja
    ------------------------------



  • 22.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @Adesola Jolaoso, awesome job on the Impervian Customer Spotlight!  ​

    Here are a couple more things.

    I think it's important to point out that while you can do blocking in DAM, it is not something that most of our customers do for everything. It's not that common to do it across the board because, like Brian said, you really need to think about it, and oftentimes you want buy-in from upper management on what it is we want to make sure we block in every circumstance.

    And like Brian said, there are some things, like a DROP TABLE on a production database, where you can say: yes, we definitely want to block this. A lot of times you can say: nobody should be accessing the database from anywhere other than this list of IP addresses, so block anything else. But very rarely do you want a block on just a general security policy. You want to be very specific and make sure everybody knows what you're blocking and why, so that it doesn't come back on you.

    Also, when talking about the SYSDBA role, that's a great example of what Brian was talking about with the tie-in with change management, where you can say, "Hey, anything coming from this particular role must have a change management ticket, because you should never be logging in with any kind of SYSDBA role unless you've got some kind of change management setup." And then let's audit that, let's feed it back in and make sure that somebody sees: this is what was approved in change management, these were the commands that were run. Is that what you were expecting?


    ------------------------------
    Jim Burtoft
    Imperva
    PA
    ------------------------------



  • 23.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    Makes sense @Jim Burtoft, as this also opens a different question of how it's configured. If you have an agent-only deployment, it is very common to have a cluster of database gateways with agents reporting back into it, and the agent runs in a particular mode, maybe a sniffing mode or an inline mode.

    And really what that means is, if I'm going to block something, the way that I can inspect the traffic matters. Let's say that I want to look at the responses, like, hey, I want to know the number of records that are leaving.

    The agent will basically look at the request and the response and evaluate that before it makes the determination if it needs to block or not. You can change the mode the agent runs in based on an agent monitoring rule. So I can say, by default I'm running in a sniffing mode (this is getting a little bit into the details here, but I think that's the intention of this call), and then for these specific examples I want to be in inline mode, and I want to block these particular examples.

    You can tell the agent: in these scenarios, if the operation matches drop table or whatever the case is, on these particular databases, I want to be inline, and then I want to block for those examples. That's something that I've seen as the best practice, using agent monitoring rules to make that happen.
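    The mode-switching logic described above can be sketched conceptually in Python. This is illustrative pseudologic only, not Imperva's actual agent monitoring rule syntax; the rule table, database names, and function names are all made up:

```python
# Conceptual sketch of an agent monitoring rule decision: run in sniffing
# mode by default, but go inline (and block) when a listed operation hits a
# listed database. NOT product syntax -- purely an illustration of the logic.

BLOCKING_RULES = [
    {"operation": "DROP TABLE", "databases": {"prod_orders", "prod_customers"}},
]

def agent_mode(operation, database):
    """Return (mode, block) for an observed statement."""
    for rule in BLOCKING_RULES:
        if operation.upper().startswith(rule["operation"]) and database in rule["databases"]:
            return ("inline", True)   # intercept and block this statement
    return ("sniffing", False)        # observe only, no inline interception

print(agent_mode("DROP TABLE invoices", "prod_orders"))   # ('inline', True)
print(agent_mode("SELECT * FROM invoices", "prod_orders"))
```

    The design point is that only the narrow, pre-agreed cases pay the cost of inline inspection; everything else stays in passive sniffing.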

    There's also a notion of using agent monitoring rules (we haven't discussed this) for excluding traffic that's not relevant, around the whole performance monitoring and optimization discussion. Maybe your ETL jobs that run every day don't need to be audited, and you don't want to put 10,000 transactions in your audit just because they run on the database, when they're coming from a known place and are already authorized. So there's a way we can exclude traffic based on agent monitoring rules as well, if it's already known and authorized like that.

    @Jim Burtoft, not to get too far off topic here about roles and databases, but we do have a user rights management component where we can scan your database and tell you who the user is, how they got that access, and what the role is.

    I was working with one customer a few years ago that had a mandate to get rid of all default roles in the database and use their own custom-implemented roles. What they found in that process, as we were doing these scans, is that users for whatever reason were creating custom roles that had default roles nested inside them. So they found this because this role contained that role, which contained another role.

    And so it's really interesting to see what your underlying schema definition is, what your role definition is, and how people were able to get these permissions. That's one thing that's very helpful to get visibility into. But then, if somebody does have a particular role, being able to follow it, enforce that we're following the change management process, and make that an automated process is very helpful.




    ------------------------------
    Brian Anderson
    ------------------------------



  • 24.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Posted 11 days ago
    Can you walk through a quick example of the benefits of configuring stored procedure groups and the link between having to define both the stored procedure group and the table group in order to identify the actions in the audit data? We would like to leverage this feature but it seems like a lot of overhead if we have to define stored procedure groups and then turn around and build table groups on top of that. I'm not really following how the table group fits into the process.

    ------------------------------
    Jana Lee
    Cybersecurity Engineer/DBA
    Regions Bank
    ------------------------------



  • 25.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @Jana Lee

    Stored procedures, just for context: a stored procedure is like a function that you can call that does some other stuff under the hood. You won't see that in audit, because all you see in audit is the function that was called, even though it does a bunch of other things.

    We don't really know, generally speaking, what that function did. That function could drop 25 tables and you just don't know, because you don't have visibility inside of it.

    What we can do with stored procedure groups is retrieve that stored procedure and tell you what tables this particular function interacts with. It connects the stored procedure function to the table operations that are performed under the hood, and there are two things required to configure that.

    First, we need to define our stored procedure groups: the stored procedure name, the table names, and the operations. We can pull those from the database and help you programmatically define what those are.

    Then, in order to associate that to our elements in the site tree (we have a site, server group, service, and application, based on the different tiers in the environment), the application level of the site tree is where you would need to add your stored procedure groups.

    You also have to add a table group there to tell this application what tables you're going to be monitoring. Those two things are complementary; they both need to be assigned to the application in the site tree. You do have to have both of them there. The value proposition is that you now have visibility into what your stored procedures are actually doing at the database level. We're also working to implement an API for our stored procedure groups, and we already have one for table groups.

    So at least you can programmatically define table groups and associate those to the site tree. That is one thing that can help offload or optimize that process. But you do have to have both of those. And again, the value proposition here is mapping that general-purpose function that you would see in audit down to the operations, tables, and objects that are being accessed when that stored procedure is executed.
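    As a conceptual illustration (hypothetical procedure and table names, not product syntax), the mapping a stored procedure group provides, together with the table group scoping it against, might look like:

```python
# Sketch: a stored procedure group maps the function name seen in audit to
# the table operations it performs under the hood; the table group defines
# which tables are in monitoring scope. All names here are made up.

STORED_PROCEDURE_GROUP = {
    "archive_customer": [("customers", "DELETE"), ("customers_archive", "INSERT")],
}
TABLE_GROUP = {"customers", "customers_archive"}  # assigned alongside it

def resolve_audit_event(proc_name):
    """Expand an audited procedure call into its underlying table operations,
    limited to tables in the table group."""
    ops = STORED_PROCEDURE_GROUP.get(proc_name, [])
    return [(table, op) for table, op in ops if table in TABLE_GROUP]

print(resolve_audit_event("archive_customer"))
```

    The point of the two complementary definitions is visible here: the procedure group supplies the expansion, and the table group decides which of those expanded operations the application actually cares about.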

    ------------------------------
    Brian Anderson
    ------------------------------



  • 26.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Impervian
    Posted 11 days ago
    What can we suggest in terms of security policy, to a client who doesn't use the baseline feature in DAM, so we can add value to the deployment?

    ------------------------------
    Sabajete Elezaj
    SNT Albania
    ------------------------------



  • 27.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @Sabajete Elezaj

    One general-purpose suggestion: if we know nothing about the databases, let's open up the discussion around classification. You can certainly look to implement security policies for privileged operations, and that's always viable, but let's also understand where the sensitive data lives, because then I can create a policy that says, hey, did X amount of records leave my database?

    That is something I would want visibility into. For production systems, getting visibility into privileged operations like dropping tables, escalating privileges for users, and schema modifications is valuable. Also, audit policies that you can generate for SQL exceptions, and security policies for the number of failed login attempts, are very useful right out of the gate.
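    The failed-login policy idea can be sketched as a simple threshold check over a sliding window. The threshold and window values here are illustrative, not product defaults:

```python
# Sketch: alert when a user exceeds MAX_FAILURES failed logins inside a
# WINDOW_SECONDS sliding window. Values are illustrative only.
from collections import defaultdict

WINDOW_SECONDS = 300
MAX_FAILURES = 5

failures = defaultdict(list)  # user -> timestamps of recent failures

def record_failed_login(user, ts):
    """Record a failure; return True if the alert threshold is crossed."""
    recent = [t for t in failures[user] if ts - t < WINDOW_SECONDS]
    recent.append(ts)
    failures[user] = recent
    return len(recent) > MAX_FAILURES

# Seven failed attempts in quick succession: the 6th and 7th trip the alert.
alerts = [record_failed_login("app_svc", t) for t in range(7)]
print(alerts)
```

    The same windowed-threshold shape applies to the "X amount of records left my database" policy, just counting rows returned instead of failed logins.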

    Those are things you can get visibility into directly. If we know nothing about the database, these are general things you'd always want to see from a security perspective.

    ------------------------------
    Brian Anderson
    ------------------------------



  • 28.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Impervian
    Posted 11 days ago
    Hi @Christopher Detzel

    Here are a few Database Activity Monitoring questions that I have.

    1. How to review, and which files to view, in identifying issues from tech info logs (Agent/GW/MX)?
    2. When we download alerts in CSV format (Action -> Save as CSV), the error messages reported in those DAM alerts are not captured, whereas when we save an alert as PDF (Action -> Save as PDF), we are able to retrieve the error messages captured in the DAM alerts.
    3. What are basic investigation steps for common issues in DAM (Agent/MX/Gateway/SOM etc.), so that customers can do some investigation at their end while waiting for the Imperva support team to resolve the issue?
    4. How to identify if an MX/SOM has reached the maximum number of policies that can be created?


    ------------------------------
    rakesh ch
    ------------------------------



  • 29.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @rakesh ch

    There is the get tech info (GTI) log that you can pull from your MX or gateway. It basically brings the entire configuration of the MX and gateway together, including interface configurations; everything that's on the box, it tries to put together. Generally speaking, we give that to support so they can understand what your configuration is and what's going on in the environment.

    Typically, customers interested in the tech info logs have created a case because they're experiencing some kind of issue and want to investigate something. Is it performance related? Do we see a massive spike in traffic? Do we have something we want more information about? Instead of going to the GTI logs, I would point more to the performance monitoring packages that we have.

    Essentially that gives you visibility into what your CPU consumption is, the number of worker threads that are allocated, and the number of transactions being executed on a gateway. We also break that down to the server group level, so you can see whether you have one very high-consuming, high-transaction server group versus the others, and pinpoint that from a traffic consumption perspective, which is an interesting and great way to do internal chargebacks based on utilization within the organization. I would point us more toward the performance monitoring as opposed to trying to break down GTI files, which are really more intended for support.

    One thing I can share with you is if there is a particular area that you're looking for because you have experienced something and there's an error that generated and you want to know in more real time, if that error comes up, because something will happen, if that error occurs, our performance monitoring package has an array or a list of files, and then a list of patterns that you can look for.

    For the error log messages, or whatever error log file you're looking at to see if that pattern is present: every time the performance monitoring package runs, if it sees that error, it can add it to the outbound event that you're sending to your SIEM.

    You can then create triggers to say, "Oh, I saw one, here it is." That's been really helpful in working through escalations with support in case you're experiencing something with one of the appliances. But I would definitely direct you more towards the performance monitoring package as opposed to dissecting the GTI logs. What you're looking for in the GTI logs is organized the same way it is on the operating system: where are your network statistics, where is your CPU consumption, what are all the configurations you have set up? Support would typically look at those configs the same way you would look them up locally on the box.
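    The file-and-pattern matching idea can be sketched like this. The log lines and patterns are made up for illustration; the real performance monitoring package has its own configuration format:

```python
# Sketch: scan log lines against a configured list of error patterns and
# return the hits, which could then be attached to an outbound SIEM event.
# Patterns and sample lines are invented for illustration.
import re

PATTERNS = [
    re.compile(r"ERROR .*connection refused"),
    re.compile(r"OOM killer"),
]

def scan_lines(lines):
    """Return the lines matching any configured error pattern."""
    return [ln for ln in lines if any(p.search(ln) for p in PATTERNS)]

sample = [
    "2020-09-18 10:01 INFO heartbeat ok",
    "2020-09-18 10:02 ERROR gateway connection refused",
]
print(scan_lines(sample))
```

    A downstream trigger would then fire on any non-empty result, which mirrors the "if it sees that error, add it to the outbound event" flow described above.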

    ------------------------------
    Brian Anderson
    ------------------------------



  • 30.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Posted 11 days ago
    What are the considerations for building a gateway cluster in VMware, and how do you do it? Is there a document that explains the implementation more clearly? If you have a more explicit document or diagram, I would greatly appreciate it.

    Another question: I want to create a security policy based on certain DML commands, but they do not appear in the match criteria. How can that be solved?

    ------------------------------
    alejandro hernandez
    Mexico City
    ------------------------------



  • 31.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @alejandro hernandez

    I would point to the documentation. Essentially, there are two ways to create a cluster: you can either have a single interface, or multiple interfaces where the cluster network is separated out onto a separate segment and separate NIC. In the latter case, the traffic that goes to the MX, and the traffic that comes from the agents to the gateway (the listeners), is on the second NIC.

    That lets you delineate, separating the cluster traffic from the rest of the traffic the gateway handles. From a network topology design standpoint, you would define port groups in VMware: the cluster network connects to one, and the traffic coming to and from your MX, and from the agents (the listeners), is on a separate NIC.

    Those are the two modes, or ways, to design a cluster. After you deploy that, there is a gateway model (2500, 4500, 6500) that you deploy your VM as, and that determines the associated resources for that gateway: the number of CPUs, the amount of memory, and so on. All the gateways in a cluster have to be the same model. The reason is that one of them is a standby and you have N active gateways in the cluster, so N+1. If one of the active gateways goes down, the standby needs to take over for it, so it needs to be the same model with the same resources. But that's the general idea.

    We try to keep all of the gateways in a cluster on the same LAN, very close in proximity, so if you do have to fail over to one of them, you're not failing over to another network or to something that introduces latency or other issues. We try to keep them all together, like they're one cluster or one unit, if you will. But that's the general topology and design practice that I've seen.

    ------------------------------
    Brian Anderson
    ------------------------------



  • 32.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Posted 11 days ago
    What do you think about monitoring DB activity: is it better to use an agent or a mirror port?
    If you use an agent, under what circumstances would you recommend using a gateway cluster?

    ------------------------------
    CJ Kuo
    Ciphertech
    Taipei
    ------------------------------



  • 33.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 7 days ago

    Hi @CJ Kuo,

    It is technically possible to offload the remote database traffic (over the network) to a sniffing gateway appliance, where the agent is only responsible for capturing the local DB traffic. But the industry has run into issues in general with inspecting out-of-line traffic as perfect forward secrecy ciphers (ECDHE and other ephemeral Diffie-Hellman variants) become more widely used. This traffic cannot be decrypted out of line in a sniffing deployment, whereas the agent can decrypt it, since it is local to the machine with access to the required certificates.

    Aside from traffic decryption, customers have elected to deploy agent-only with gateway clustering for ease of management and simplicity. For all DAM deployments, it is almost always recommended to use gateway clustering, especially if the amount of traffic exceeds the capacity of a single gateway. If you have to have multiple gateways, you reduce the cost of redundancy with the N+1 model that gateway clustering introduces. Also, gateway clustering maintains awareness of sessions across all gateways in the cluster, so if you lose a gateway, visibility into the existing user sessions to the database is maintained.
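    A quick back-of-the-envelope illustration of the N+1 cost saving (the per-gateway capacity figure is hypothetical, not a published rating):

```python
# N+1 sizing sketch: cover peak load with N active gateways plus one shared
# standby, instead of pairing every active gateway with its own passive one.

def gateways_needed(peak_tps, per_gateway_tps):
    """Active gateways to cover peak load, plus one standby."""
    active = -(-peak_tps // per_gateway_tps)  # ceiling division
    return active + 1

# e.g. a 90k TPS peak on gateways (hypothetically) handling 25k TPS each:
print(gateways_needed(90_000, 25_000))  # 4 active + 1 standby = 5
```

    Compare that with an active/passive pairing of the same fleet, which would need 8 appliances for the same 4 active units; the shared standby is where clustering reduces the redundancy cost.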



    ------------------------------
    Brian Anderson
    ------------------------------



  • 34.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Posted 11 days ago

    Hello, 

    What is the timeline for no longer needing to register the Imperva public key in order to deploy an agent? We heard that there will be a change from kernel space to user space deployment. Is there a timeline for this?

    General question regarding the User space Agent:

    • With the change from kernel space to user space deployment, is there going to be an increase in requirements for the agent (i.e., RAM, CPU, storage usage on the server where it is deployed)?
    • Will we be able to push the new user space Imperva agent via the GUI push?
    • With the new agent being in user space, will we still need to register the public key if the server has secure boot enabled?
    • With the new agent, if the operating system is updated with the user space agent installed, we would not need to upgrade the agent for compatibility; can you please confirm? Also, is this the case for patching events as well?
    • Will the Imperva MX version 13 be compatible with the user space agent?


    ------------------------------
    James Lu
    Kaiser Permanente
    Pasadena CA
    ------------------------------



  • 35.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @James Lu

    We are in the process of releasing user space agents, and we are doing that in sequence by operating system. I think there are a few of them out there already; I need to check with the product group to see which ones are available and in what order they're being released. But the idea is to come out of the kernel and get into user space, making our agent code more reusable and more broadly applicable.

    You don't have to have as many agents because you're not tied to a specific kernel anymore. I can get back to you, after I work with our product group, with updates on all of the agent coverage we have and where we are in that process. But that is certainly the direction we're going with our agent design, and we are already in that process: some of them work in user space today and others are still in the works. I can get back to you on the community with confirmation on those.

    1. The expectation would be no, you would not have to allocate more resources on the server to run the agent. There shouldn't be a performance impact; the expectation is that it would operate the same way.
    2. That should work the same way. We have an installation manager and then we have an agent, and the installation manager is responsible for brokering the version and upgrade process for the agent package itself. So those are two different installers: you can push your installation manager out everywhere, and then from the GUI itself push the agent versions. That should be a seamless process with our software update feature set.
    4. I need to validate that and get back to you, unless somebody else on the call knows that specifically. I'll check with our product group and follow up on that point.
    5. Historically, if you were to update the operating system, it might change the kernel, so you would then have to upgrade the agent to support it. That's the entire value proposition of going to user space agents: you wouldn't have that same level of dependency. You would be able to run your operating system updates and not have to change your agent nearly as frequently, or at all for that matter, because you don't have the same dependency on a specific kernel version.


    ------------------------------
    Brian Anderson
    ------------------------------



  • 36.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Posted 11 days ago
    Hi,

    I am new to Imperva. Is there an add-on that can allow me to see the before and after values of fields on database tables using Imperva?

    ------------------------------
    Lucky Mphelo
    SARB
    Pretoria
    ------------------------------



  • 37.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Imperva Employee
    Posted 8 days ago
    @Lucky Mphelo, welcome to Imperva!

    For any modification that's made, we don't really have a "show me what was there before I modified this" view of the previous state, because we see the query as it happens. But what we can do is output all of the records that we see, in sequence, to a SIEM or somewhere else, which would say, "Hey, here's everything we saw from a modification perspective."

    If you knew today that the new value is X, and tomorrow you see another new value, then if we output those in sequence, that is a way to do historical reference on what the value was and every value it changed to over the course of time. So we don't really have a state machine to keep track of those before-and-after values, but we can certainly simulate that by outputting the events and giving you a way to review them in sequence with some other storage, like a SIEM or Elastic or something along those lines.
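    Replaying the sequenced modification events to simulate before/after values might look like this sketch (assuming, hypothetically, that the events arrive time-ordered from your SIEM or index):

```python
# Sketch: reconstruct per-column value history by replaying modification
# events in sequence. The event shape (timestamp, column, new_value) is a
# made-up illustration, not an Imperva output format.

def value_history(events):
    """events: list of (timestamp, column, new_value), assumed time-ordered.
    Returns column -> [(timestamp, old_value, new_value), ...]."""
    current, history = {}, {}
    for ts, col, new in events:
        old = current.get(col)          # None until the first observed write
        history.setdefault(col, []).append((ts, old, new))
        current[col] = new
    return history

events = [(1, "email", "a@x.com"), (2, "email", "b@x.com")]
print(value_history(events)["email"])
```

    This is exactly the "simulate a state machine downstream" idea: the monitoring layer emits each new value as it happens, and the storage layer derives the before/after pairs.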

    ------------------------------
    Brian Anderson
    ------------------------------



  • 38.  RE: Database Activity Monitoring (DAM) Ask Me Anything Session

    Impervian
    Posted 11 days ago
    How to identify if an MX/SOM has reached the maximum number of policies that can be created?

    ------------------------------
    rakesh ch
    ------------------------------