During this AMA the team will be sharing their unique insights on working behind the scenes at Imperva, including how Imperva finds and manages new threats, some of their best practices and tips for securing workloads, and what's coming on the horizon (that we can share). This webinar will be a mostly open forum where we will be fielding your AppSec questions live. The string of questions and answers can also be found here:

Top Support Questions on certificate issues, onboarding, WAF Gateway, and more
Resource Bundle - Cloud WAF SSL Certificate
Resource Bundle - Cloud WAF Onboarding (previously Incapsula)
Resource Bundle - Imperva WAF Gateway (previously SecureSphere)

Common Questions
"So, we want to export a list of events from Cloud WAF for internal review, but we don't have a SIEM. Is there any other method of exporting, for example, to Excel? If not, could this be considered? This will help me relay the value to my technology and leadership team."
I can take that.
The UA team is actually working on a much better reporting tool, which is expected to come into the queue in 2020. At this point, we understand that the need is to report to different teams, to coordinate and communicate. We do a good job within the portal, but when it comes to having data outside of it, there are some formats available today.
I do want to stress that a SIEM is the best solution, because you can read logs from other sources alongside ours and draw more meaningful insight from them. However, from the portal itself, you can definitely work with what comes with Attack Analytics. Attack Analytics gives you information and beautiful insights, which are very helpful in maturing your WAF configuration; much of the question is, how do you mature your configuration? That is where the answers come from in the site.
Second thing, you can take Attack Analytics and export a PDF. Now, we understand, and I want to be open here, there's a need for scheduling certain reports. There is a weekly report for the account that comes from the system with very good insights, but it is just weekly, not on demand. So those kinds of things we understand are needed, and the Imperva team is working on that shortcoming in the UI reporting, which will come in Q1. That is exactly what is going to solve the kind of question you asked here: what can we do to get the same data out of Imperva, with scheduled reports and data alerts? That is where we are heading in Q1 2021.
A couple of things to add in there as well. So as Abhishek mentioned, the sooner we can get the data outside the system, the better, because that just opens up a world of opportunity. Sending the data to S3 through the SIEM integration, we love that, because technically it makes everything very open and you can use any other query engine you want to query it. One thing that we are seeing much more commonly is people dumping the data in S3 and then using tools like AWS Athena to query the data directly. This means you don't actually have to send it to a SIEM. But at the same time, I will also say, there's pretty much no reason not to have a SIEM at this point, considering you can stand up an instance of Elasticsearch for free, or very, very inexpensively, or even just pay for something like Elastic Cloud, or there are plenty of other services with very low-cost entry tiers.
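As a rough sketch of what querying those S3-delivered logs with Athena can look like: the setup of the external table over the S3 prefix happens once in Athena itself; after that, reporting is just SQL. The database, table, and column names below (including the blocked-request action value) are illustrative assumptions, not the actual log schema.

```python
# Sketch: build an Athena SQL query over Cloud WAF logs delivered to S3.
# Table and column names are assumptions for illustration only.

def athena_blocked_requests_query(database: str, table: str, day: str) -> str:
    """Return SQL that counts blocked requests per client IP for one day."""
    return (
        f"SELECT client_ip, COUNT(*) AS blocked "
        f"FROM {database}.{table} "
        f"WHERE action = 'REQ_BLOCKED_SECURITY' AND dt = '{day}' "
        f"GROUP BY client_ip ORDER BY blocked DESC LIMIT 20"
    )

query = athena_blocked_requests_query("waf_logs", "cloud_waf", "2020-11-01")
print(query)
# The resulting string would be submitted via the Athena console or
# an SDK such as boto3's start_query_execution.
```

The point is just that once the data rests in S3, an ad hoc report is a query string away, with no SIEM in the path.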
And frankly, the value that we see from having it, integrating all of the other solutions, it's worth the time to do it. That's probably what I would say. And Jared, go ahead.
Oh, I was going to say just one other option that is not the most graceful: use the Incapsula Logs Downloader to get the raw logs, and then you can do what you will from there. The Logs Downloader just consumes the logs, and they rest on whatever device you've used to download them; it doesn't have to transport them to the SIEM. So that's a third option.
One more thing that I figured I'd mention is the API. You can use the API to get data out. There is an Incapsula CLI and some GitHub tools available. These are very helpful for getting data outside the portal, especially when you want to report ad hoc; you can do it anytime you want. So just use the API and API automation. If you are looking for something specific, add those requests on those GitHub tools; we'll be happy to take them. Also, you can create your own reporting using the API. So if you have any tool that consumes an API and ingests data to report on, it's there for you. The options with the API are limitless.
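To sketch what that API automation can look like, here is a minimal request builder using only the standard library. The endpoint path and parameter names (api_id, api_key, site_id) are assumptions for illustration; check the API reference or the Incapsula CLI and GitHub tools for the exact calls your account supports.

```python
# Sketch: build a POST request for ad hoc data export via the API.
# Endpoint and parameter names below are illustrative assumptions.
import urllib.parse
import urllib.request

def build_request(endpoint: str, api_id: str, api_key: str, **params):
    """Build a POST request carrying API credentials plus query filters."""
    body = {"api_id": api_id, "api_key": api_key, **params}
    return urllib.request.Request(
        endpoint,
        data=urllib.parse.urlencode(body).encode(),
        method="POST",
    )

req = build_request(
    "https://my.imperva.com/api/visits/v1",  # assumed endpoint
    api_id="12345", api_key="secret", site_id="67890", page_size=100,
)
# resp = urllib.request.urlopen(req)  # network call, not executed here
print(req.get_method(), req.full_url)
```

From there, any reporting tool that can ingest JSON can build on the response.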
Wow. I mean, those are great answers. So thank you. That's exactly what I was expecting. Next question: in Cloud WAF, there's DDoS mitigation out of the box, but can you explain the Advanced DDoS setting and how to manage the requests-per-second threshold effectively to achieve the best results?
I would start with this: if the team is not aware of what Advanced DDoS is, please read up on it; it's available on the https://www.docs.imperva.com site. Frankly, I would not suggest that you depend on the Advanced DDoS setting alone, because a spike may just be high usage of the site. It may also coincide with some sales activities, marketing campaigns, and things like that. What Advanced DDoS basically means, in security terms, is seeing more requests than the site generally sees. Say, for example, you have the default of a thousand requests per second, and you usually see 400 or 500, and suddenly it goes to 1,000; that means something has changed, but it does not mean it is actually a DDoS.
What the system is telling you is: we expect you to be more aggressive at this point, when the number of requests to the site is more than you expect. The term Advanced DDoS is often taken as a pure security signal; it's actually alerting you that the site is being used much more than expected. For some sites, the default value of a thousand requests per second is not even meaningful, because they are actually at 30, 40, or maybe 50, 60,000 requests per second. So it depends on what you want to achieve with that value. Second thing, frankly, I would suggest you look into some Incap Rules, specifically the site request rate, which is the mean requests per second over five seconds.
So you can actually create a threshold before it reaches the Advanced DDoS level. You can say, "Okay, at, say, 600 requests per second as a five-second mean, do something." Say, "Only allow browsers, only allow the specific clients that you want." You can also go a little more aggressive, but less aggressive than Advanced DDoS, with, say, the site average request rate over one minute, which calculates the mean requests per second over a one-minute window. If you find that the average requests per second has stayed higher for more than one minute, then do something more aggressive: you can do rate limiting, you can do some bot mitigation, and so on.
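The tiered approach described here can be sketched as a small piece of logic: a short-window mean for bursts, a one-minute mean for sustained load, and escalating actions. The 600 and 1,000 rps numbers mirror the example in the discussion; they are illustrative, not product defaults.

```python
# Sketch of the tiered rate-threshold logic described above.
from collections import deque

def mean_rps(timestamps: deque, window: float, now: float) -> float:
    """Mean requests per second over the trailing window (seconds)."""
    recent = [t for t in timestamps if now - t <= window]
    return len(recent) / window

def choose_action(rps_5s: float, rps_60s: float) -> str:
    if rps_5s >= 1000:    # Advanced DDoS territory: challenge unknown clients
        return "challenge-unknown-clients"
    if rps_60s >= 600:    # sustained for a minute: be more aggressive
        return "rate-limit-and-bot-mitigate"
    if rps_5s >= 600:     # short burst: only allow classified browsers
        return "allow-browsers-only"
    return "allow"

ts = deque([0.0, 1.0, 2.0, 3.0, 4.0])  # one request per second
print(mean_rps(ts, window=5.0, now=4.0))   # → 1.0
print(choose_action(rps_5s=700, rps_60s=200))  # → allow-browsers-only
```

The ordering matters: a burst that clears the five-second threshold but not the one-minute average gets the gentler treatment, which is exactly the "less aggressive than Advanced DDoS" tier.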
So I would suggest that Advanced DDoS is more of a challenge mechanism. It is misunderstood as challenging every request; it only challenges clients that are unknown at the first request. Which basically means that when the first request comes, Imperva does not yet know whether it is a real browser, because it may be coming via some forward proxy or a reverse proxy, or maybe the browser has some extension interfering or not responding to our classification checks. So Advanced DDoS will only mitigate the unknown; it will not prevent or block any browser or known client that we have already been able to classify.
So these are all different ways of doing it. It all depends on how aggressive you want to be, what threshold you want to keep, and whether you want more aggressive blocking to kick in before Advanced DDoS does.
That was great. And before I go to the next question on the community, there is a question in the chat. Wes asks, "What are the plans to integrate the Distil technologies?" He had to leave early, so I thought I'd get that asked.
The connector architecture and everything else has also migrated now to the Imperva Cloud, which we refer to internally as UMC, or Unified Management Console. Longer term, the next stage in things that we're looking to do is to take the Distil integrations, and effectively the connectors, and make it so that we can start to integrate that with SecureSphere. And so the goal for us there will be to effectively use SecureSphere as a delivery mechanism, to inject all of the relevant Distil information and send that up to the Unified Management Console.
So effectively, we're just continuing to take all of the remaining assets from Distil and integrate them into the existing cloud web proxies and the Unified Management Console, and then further strengthening the tools and starting to ... There's other work that's going to happen behind the scenes as well, but effectively, those are the really big-ticket items for us.
Cool. Thank you. Any other thoughts around that? Okay, good. And I'm going to ask Chris Richardson to take his stuff off of mute and feel free to ask your questions since it's a little bit more in depth.
Thanks. So I've implemented the Cloud WAF at some very large organizations, and one of the things that I run into is Citrix environments, especially with global load balancers. One of the problems that we're seeing is that it's not working with Citrix, and we basically have to have two WAFs, or two environments, two cloud environments, because we're not able to terminate the traffic at the global load balancer. I saw this earlier this year and I wanted to see if there was anything being done about that, because otherwise we have to have two different CDNs, security-plus-CDN environments. And I'd rather not have a Citrix WAF and an Incapsula WAF.
Jared, do you have any thoughts on this one?
Yes. I have seen configurations that help to support this scenario. One thing to keep in mind is that the Cloud WAF is for inspecting HTTP-compliant traffic. So as soon as we start talking about VPNs or Remote Desktop Protocol, chances are it's not going to work correctly. What I have seen for Citrix is cases where we still protect the front door, so the login page, but once the user authenticates, they're handed off to a ... It requires an additional feature from the Cloud WAF called the Dedicated Network, where we're able to peel off some of that traffic. And it also requires a little bit of work on the admin side to configure a few settings. But I have seen mentions internally of customers being configured this way and working.
It's definitely not straightforward, so it's a little involved to get it set up, but it can be done. As far as future rollout, that would be a question we'd have to direct at our PM: are we going to support non-HTTP-compliant applications as they become more common? Things like the Citrix Remote Desktop you mentioned, especially with a lot of people working remotely these days.
All right. Thank you very much.
Again, one thing, I ... Sorry, I didn't hear the question fully when I came in. Sorry. One thing is, when you use the login page protection from Imperva, you can use ATO. The most important things protecting you are the username, password, and the MFA you're using. If someone is trying to guess username/password combinations, you'll probably be able to see it from ATO. It could be a credential stuffing attack. We can give you the data on what usernames they are trying, and you can match how relevant they are to your system. So you can understand whether this is a simple script run at large, or a targeted attack. And then you can figure out how your incident response should work.
Knowing what is happening depends on the data, and if you have the data from ATO, you know what to do. Sometimes the data really helps you understand what [inaudible] you should pick with these kinds of issues. Second thing, coming back to Citrix a little: please use your voice with Citrix, tell them we need this integration. When two companies work together, the outcome for customers like yourself would be great. So at this point, I think the best thing I would do is reach out to my Alliance team and say, "Okay, we need to do something with Citrix. They are a great solution, we are a great solution. Let's work together to solve what Chris asked." So that would be a separate effort internal to Imperva.
Good point. I'm going to ask the other question here from the chat. "We have logs forwarded from Incapsula to a SIEM tool, Splunk. However, we do not see any fields such as policy name, severity, blocked or not-blocked alerts, et cetera, like SecureSphere. Do we have future plans to have detailed logs, like SecureSphere, on the Incapsula side?"
I can take that. Is that question specific to the event coming from Incapsula log?
Do we have the person on the call?
So Vincent, if you want to take yourself off mute, maybe just going to clarify, that'd be helpful. And maybe you can-
Just to answer at a high level, there is a field called "act"; it carries 30 to 40 possible actions taken by the proxy. The other thing you're looking for is the response from the origin, which is "cn1", depending on the format of the data you're using. So if I'm using, say, simple CEF, "cn1" gives you the response from the origin, and you can look for "act" equal to one of those 30 or 40 values: request passed, request cached, request blocked by security, and things like that. So you can definitely see what the proxy is doing, and you can measure it based on whatever filter you want to create. We have seen a lot of customers using that. You can even build a complete story of the latency using the end and start times.
So there's a lot of data that comes into the SIEM. It may not be the best, but it is good enough to answer at least 80% of the questions that we get: What is my site doing? How do I measure that? What is the health check? Et cetera. So the SIEM is definitely the best place to look.
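To make the field discussion concrete, here is a small sketch of pulling the proxy action ("act"), the origin response ("cn1"), and the start/end latency out of a CEF record. The sample line and field names are illustrative; match them against the log-structure documentation linked in the chat.

```python
# Sketch: parse the key=value extension section of a CEF log record.
# Sample line and field names are assumptions for illustration.

def parse_cef_extensions(line: str) -> dict:
    """Return the extension fields of a CEF record as a dict."""
    # CEF: seven pipe-delimited header fields, then space-separated key=value pairs
    extensions = line.split("|", 7)[-1]
    fields = {}
    for token in extensions.split(" "):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

sample = ("CEF:0|Incapsula|SIEMintegration|1|1|Normal|0| "
          "act=REQ_PASSED cn1=200 start=1604224800000 end=1604224800123")
fields = parse_cef_extensions(sample)
print(fields["act"], fields["cn1"])                      # → REQ_PASSED 200
latency_ms = int(fields["end"]) - int(fields["start"])   # the latency "story"
print(latency_ms)                                        # → 123
```

Note that real CEF values can contain escaped spaces, so a production parser would need to handle escaping; this sketch covers the simple case.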
Great. Hopefully that answered Vincent, I know you're having microphone issues.
Jared also added the link. Thank you, Jared. That is actually the structure of the logs, and it has an example. So look through it; that's the best answer. Thanks, Jared.
Teamwork at its best.
Teamwork makes the dream work.
Exactly. Again, post your questions hopefully on community. I still have some questions here, they were posted in the chat. We want to hear from you. Next question, and I really like this question a lot. But what are some best practices for deploying WAF gateway in AWS?
I can definitely take that one. So there are a lot of best practices, and ultimately, like everything in AWS, it's going to be pretty highly dependent on your architecture. And not just the architecture of a single application; it's more the architecture of your organization and how you structure multiple accounts. But some of the very basic things that we generally recommend: first and foremost, the CloudFormation templates that we provide and automatically generate using the template tool should be thought of not as written in stone, but more as a guide. So we strongly recommend that you adjust the thresholds and modify those to meet your needs, in terms of scale-up, scale-down, and the areas you have around that. We tend to find that works really, really well.
The other thing that I strongly recommend is that, if you're going to be deploying across multiple accounts, what you're probably going to want is a security VPC, so to say, that is effectively your primary front door for all of the applications and where you're managing the cluster of gateways together, and then use VPC peering between the two. We tend to find that a little bit better, because as the number of accounts that you deploy across increases, it becomes very difficult to manage at a very large scale. So by having a centralized VPC, you're basically creating your own, I guess, almost like a DMZ, for better or worse, where you're containing all the WAF gateways themselves. And with large-scale MX, you now have the ability to scale up well beyond what we could do in the past in terms of total number of MXes, I'm sorry, total number of gateways.
And so, this is what we basically would look at as being a logical operation for you to be able to scale your environment accordingly.
Great. Thanks Peter. Anybody else want to add anything? By the way, if you have experiences as well, feel free to add those on the call.
I was just going to say, I second Peter's recommendation about building a security VPC. It prevents you from getting pigeonholed in the future. Because inevitably what happens is your WAF environment will expand. It may start out as a small project to cover one particular set of sites or subset of sites, but later it will expand, and you'll find yourself in a difficult situation at that point if you don't plan ahead.
That's a good thing.
The other thing too, I just didn't mention it, but don't be afraid to mix and match between Cloud WAF and WAF Gateway. There are definitely situations where you're going to find that WAF Gateway or Cloud WAF is going to be a lot more flexible in a given environment. Sometimes one just isn't possible; you might be deploying an application in mainland China, for instance. And WAF Gateway ... is really great for cloud, because effectively you're letting the application teams manage that. But ultimately, mixing and matching is something that we see pretty commonly. You just look at the requirements of the projects that you're working with.
And that was part of the rationale for why we rolled out FlexProtect: we wanted to make it easy for customers to pick and choose the technology that worked best.
Thanks, Jared and Peter. Next question, and I always like these kinds of questions: what enhancements do we expect to see in the platform in the near future?
By the way, Peter, just quickly, we do have some roadmap items from calls that we've had in the past, and I'll share those over time and reply to that directly.
So that helps you see what's ... But, sorry, go ahead. I just wanted to make sure that that's out there too.
I'll just give the high-level notes. Actually, first and foremost, even before I do that, I really strongly recommend that everyone join the session in two weeks, which I believe is the Project Universe one. Project Universe is effectively one of our big platform revamp projects that we're currently running. As Abhishek actually mentioned a little earlier, we're massively redesigning everything from the home experience to the functionality. Some of the big themes that we're really trying to tackle there are, first and foremost, just better visibility and insight into everything that you get.
So we're really looking at how our users generally use the solutions and how they manage and work with sites at large scale, specifically going across hundreds of sites, which many, many of our customers are doing. Also, some of the bigger things happening underneath the hood: we're completely re-platforming and re-architecting a lot of the underlying technology that we use for things like data storage. This is going to allow us to introduce features like a real-time SIEM feed, which effectively gives us the ability to feed events to your SIEM in much more real time. We've got some delays right now in terms of how the traffic data is delivered; our goal is to have it delivered within a couple of minutes, basically, and probably even better in many, many scenarios.
So these are some of the enhancements that are coming. I'm just going to plug the project universe webinar because we're going to be going through everything in depth. You're going to see the new mock ups, you're going to see examples and demos of it. We're super excited. We've been working on this all year and it's going to be a really, really big change to the way that we work.
Thanks, Peter. And I put that in the chat, so if you haven't signed up or RSVP, please do. So any other thoughts from anybody else on that? I know it's a big question. But, we'll get to see a lot of that in a couple of weeks.
And in addition, we've had some webinars around the roadmap, and I'll share those. We've had a Cloud WAF roadmap webinar, and we've had a WAF Gateway roadmap webinar. Those give you some of the features and benefits and things like that. So Shamila has a question: Will we see any web app scanner integrated with Cloud WAF?
I can definitely take that one right now. So by and large, probably not officially. If we do any integrations, it would probably be more of an open source integration. And I guess, to explain the rationale behind it. So if we go back many years now, we previously had some integrations with WAF Gateway and some dynamic vulnerability scanners. I think what we generally found though, when we started to actually deploy those in production with a lot of our customers was, by and large, most of the things that the security vulnerability scanners would identify, that could then easily be ported into a rule set, was usually stuff that we were already blocking out of the box. And it was things that were actually already covered by the default policies.
I mean, this is especially the case with Cloud WAF, where there's a really large set of default policies that are applied just by having the WAF enabled. And so it became one of those things where you would have the scanner integrated, and basically the scanner just said, "Keep doing what you're doing," and there wasn't really that level of granularity or customization. Now, with that being said, there is an opportunity to take findings for more advanced vulnerabilities that get identified. We frequently work with organizations where they've done something like a manual penetration test, and the test has found some vulnerability, and then we go in and write a rule to specifically mitigate that.
But I tend to find that those are more common for manual penetration tests than for scanners themselves. And Jared, Abhishek, I know you guys have some experience in this scenario as well, but that's just generally my thoughts and my experience working with it. So honestly, it's not something that's very high on our priority list.
One other thing that we can do, again, comes from the same solution. Attack Analytics can give you information on the CVE attached to an attack. You could then correlate, if you are able to import that additional data, in whatever form, into your SIEM, the open risks that you are seeing with the CVEs that are being attacked. Of course, it depends on your configuration, whether you are in block mode or open mode; that is where it helps you fine-tune the services. So, I think from a customer's perspective, it's very important to know: What's the risk? Is it closed? Sometimes it's a race, and when it's a race, time is against you. That is when you're trying to figure out, "Okay, there's a new CVE that came out, is it protected?"
And my discussion with the VP of product management is that they want to create a new RSS feed, which will tell you what new protections have come out. It is in some phase of development. But I see Yuzi is here; he is the PM for Attack Analytics, and he's the best person to be listening here. So I think we have the right audience. This is the right way of expressing your frustration, and we want to listen to you. It's not natively integrated yet, but yes, there is a need, and I think it is a very, very good ask, because you want to see: What is the risk? What is the priority?
Of course, don't depend on CVSS alone; a third-party rating doesn't mean much on its own, because your system may not be Oracle while the attack targets Oracle. So have context, have a tag in your SIEM: this attack is for that system. So I think the best answer here is to have some tagging and see how you view it in the UI. The follow-up question: if you can see the CVE within the Cloud WAF, how do we then do virtual patching with Cloud WAF? The best thing with the Cloud WAF is that everything is managed. We have so many good people working behind the system, monitoring the internet. They see the threats, they see the tools, they reverse engineer them, and put protections in place.
I think, as I said, what you're trying to figure out is: Am I protected? How do I measure it? We do provide in the logs, depending on your SIEM format, a field equal to some value. That value is internal to us. I think it should be opened up in some form so you can see, "Okay, this is a CVE, and this is what the protection is." The need is: "Am I protected or not protected?" It's a very big question and needs to be answered immediately; you shouldn't have to create a support ticket. Again, I would fall back to Yuzi and Oz, who are the right people to answer this, because this is a common question here. The third follow-up question: can we map Cloud WAF to the MITRE ATT&CK framework?
There has been little focus in MITRE on the web application side; it is mostly on the system side, on Linux and privilege escalation or local privileges, on mobile or the OS. Yes, there is a framework coming up; there has been a lot of discussion internally about how you map them. There is some framework available within the system. It's not official, because the framework itself is changing, or I would say is not mature enough yet. So rather than ending up in an attack framework, which is more of a coordination between blue, purple, and red teams, we are mostly on the AppSec protection side.
I think Peter here has done great webinars on APIs and the WAF, and he maps all those OWASP Top 10 protections to Imperva. Those are some of the things that we do. From the perspective of MITRE, it is still not a dimension for the WAF. So we do have some framework internally, but it's not exposed to the external world.
The one thing on MITRE in particular, I mean, first and foremost, I just want to say: we love the ATT&CK framework. I love the rationale behind putting a framework in place to really manage and respond to threats; I think it's brilliant. It was long overdue in the industry. We've seen a lot of movement from the SIEM vendors in terms of making this a core part of their stack and their solution. At RSA this past year, I mean, you couldn't walk down the hall without getting slapped in the face with the MITRE ATT&CK framework; it was awesome. I was actually really happy to see that. So one of the things that we generally recommend is that your SIEM is really the common point, the core focal point, for dealing with that.
Now, for us, with things like Attack Analytics, we specifically added enhancements earlier this year to start exposing more CVE information, as well as being able to link back to the CVEs. That's really where we see the tie-in for application security threats: tying them back to the CVEs, or the CWEs in instances where it's not actually a ... it's a custom vulnerability. But if there's anything that we're missing, if there are different things that you guys want to see, the one thing I also want to say out here is: User Voice is the best way to message that out. The reason is, as soon as you post it in User Voice, other people can effectively up-vote those things, and then that gets the attention of our product managers and everyone.
And as Abhishek mentioned, one of our product managers for Attack Analytics is on the call right now, which is awesome, because he can't pretend like he didn't hear any of this conversation now.
Peter, do you think that he is ready yet?
Does he want to protect himself right now or say something? No, I'm just joking. So User Voice is your way to go in and add feature requests and those kinds of things. You just log in; I'll put the URL there. So we do have another question. One: will there be any developments in the WebSocket policy in the future? And then two: does version 14 of the WAF have any major functions planned to be launched? Jared, I know you're ready for this one.
Yeah. So for the first one, I have not seen any mention of WebSocket enhancements on the roadmap. It doesn't mean they're not there or won't be there; I just haven't seen them myself. For the second one, major changes or features coming to version 14: advanced bridge is a big one. If you're familiar with the WAF deployment types, you have sniffing mode, KRP, which is Kernel Reverse Proxy, and then you have bridge mode. On top of bridge mode, you can also run TRP, which is Transparent Reverse Proxy, which is a requirement in order to decrypt Diffie-Hellman ciphers.
Where you can run into some problems is that there's limited support in TRP for cipher sets. KRP supports many more ciphers, but it's also an entirely different deployment scenario: KRP operates at layer three, and bridge at layer two. So one of the exciting things is advanced bridge mode, which essentially brings the same level of cipher support that KRP offers into TRP. A lot of the cipher negotiation issues will go away, and it will be much more straightforward. Today, when you're looking at cipher support in the docs, you have to make sure you're looking specifically at TRP; you have to pay attention to those asterisks that you'll see on the docs page. In the future, essentially, if KRP supports it, then TRP will support it as well.
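The mode-versus-cipher decision being described can be summarized as a small decision function. This is a rough sketch of the logic as explained in the discussion; the cipher check and the pre-v14 TRP subset are placeholders, and the docs page (with its TRP asterisks) remains the authoritative support matrix.

```python
# Sketch of the deployment-mode / cipher decision described above.
# Cipher groupings are placeholders, not the documented support matrix.

def cipher_in_trp_subset(cipher_suite: str) -> bool:
    """Placeholder for the limited pre-v14 TRP cipher list."""
    return cipher_suite == "ECDHE-RSA-AES128-GCM-SHA256"

def can_decrypt(cipher_suite: str, mode: str, version: int = 13) -> bool:
    """Can the WAF decrypt traffic for this cipher in this deployment mode?"""
    ephemeral = "DHE" in cipher_suite      # matches DHE and ECDHE key exchange
    if mode == "KRP":                      # Kernel Reverse Proxy, layer 3
        return True                        # broadest cipher support
    if mode == "bridge":                   # layer-2 bridge, passive decryption
        return not ephemeral               # ephemeral keys need a true proxy
    if mode == "TRP":                      # Transparent Reverse Proxy on bridge
        # v14 "advanced bridge" closes the TRP/KRP cipher gap
        return version >= 14 or cipher_in_trp_subset(cipher_suite)
    return False

print(can_decrypt("DHE-RSA-AES256-SHA", "bridge"))  # → False: needs TRP
```

The v14 branch captures the point being made: once advanced bridge lands, "if KRP supports it, TRP supports it."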
The other things that are coming, and Peter touched on this earlier, are Advanced Bot integration with SecureSphere WAF, as well as HTTP/2 support. I know that's been around for quite a while; you guys have probably seen it in the UI, it's been in beta for quite some time now. But that's starting to gain traction toward full-fledged support. So those are the big things that are coming for [inaudible 00:34:11].
Two things. To go back to that first question, I did talk to the product manager, CJ, this morning. I tried to do some homework beforehand, so thanks for posting that. On the WebSocket piece, he said we do support JSON. And in regards to version 14, he actually presented on the roadmap for WAF Gateway in June. I will post that, and there's a PowerPoint for it as well. He did say he's going to be updating that PowerPoint in the near future to show some of that stuff. So hopefully that will help. Anybody else on that topic? Good. There's another question: can we currently map Attack Analytics to the OWASP Top 10?
The OWASP Top 10?
So that's a really good question, and it follows along with a similar theme from earlier. We're showing CVEs; we're not currently showing any mapping back to the OWASP Top 10. You can roughly group them, but it's more of a manual process. Currently, today, we can't show it directly in there. I think that's probably a great enhancement that we should make, and I'm just saying that out loud because we've got the product manager on the call. But again, I think it's something we just log as a User Voice request, and then we can prioritize accordingly from there.
But it absolutely makes sense, especially now when we see that it's not just the OWASP Web Top 10; we also have the OWASP API Top 10 and different classifications of vulnerabilities that are starting to apply. So it's getting harder, I think, for a lot of organizations to prove that they are secure against those vulnerabilities when auditors ask them specifically about it.
Thanks, Peter. Any other questions? So I have one that folks have asked, and maybe this is a Jared one, because he slightly mentioned it: how do you handle DHE ciphers on-prem? You're on mute.
If you're in KRP mode, it won't be an issue. If you're in bridge mode, then you have to enable TRP support. The good news is that TRP is just a checkbox to enable in the UI; there's no reconfiguration of the platform. I see a lot of cases where customers upload the certificate and think all is good: "I have the certs, I can decrypt the traffic." But that's not always the case; it also depends upon the cipher set that's in use. So it's important to look at your logs under Monitor > Alerts and look for any SSL untraceable sessions. Anything related to that means that we can't decrypt that traffic. We can see it, but we don't know what's inside of it.
So ideally, you would not have any of those types of events in your logs on-prem. If you do, you want to track those down. When you click on those alerts and look to the right, you'll be able to pick out real quickly if it's using Diffie-Hellman, because it'll have a cipher suite with DHE in the name, and that means you need to enable TRP. However, don't enable TRP haphazardly, because it is a true proxy at that point: we are terminating the connection and re-initiating it on the backend. And depending upon what's running through bridge mode ... kind of along the lines of the question we had earlier. For example, Citrix, any type of VPN traffic, or remote desktop traffic generally does not like to be terminated. So don't enable it in the middle of the day; make sure you do your homework and coordinate with your teams before turning that on.
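As a rough sketch of the triage Jared describes, you could flag the cipher suites pulled from those alerts. The function name and sample suite names below are illustrative, not part of any Imperva tooling; the rule of thumb is simply that ephemeral Diffie-Hellman key exchange (DHE/ECDHE) cannot be passively decrypted in bridge mode:

```python
# Sketch: flag cipher suites whose key exchange is ephemeral Diffie-Hellman.
# Bridge-mode inspection cannot derive the session key for these suites,
# so traffic using them requires TRP to be enabled.

def needs_trp(cipher_suite: str) -> bool:
    """True if the suite name indicates DHE/ECDHE key exchange."""
    name = cipher_suite.upper()
    # Covers both OpenSSL-style ("ECDHE-RSA-...") and IANA-style
    # ("TLS_DHE_RSA_WITH_...") suite names.
    return name.startswith(("DHE-", "ECDHE-")) or "_DHE_" in name or "_ECDHE_" in name

suites = [
    "ECDHE-RSA-AES256-GCM-SHA384",       # ephemeral DH -> needs TRP
    "AES256-GCM-SHA384",                 # plain RSA key exchange -> OK in bridge mode
    "TLS_DHE_RSA_WITH_AES_128_CBC_SHA",  # ephemeral DH -> needs TRP
]
for s in suites:
    print(s, "->", "needs TRP" if needs_trp(s) else "OK in bridge mode")
```

Running this against the suites you see in your untraceable-session alerts tells you quickly whether enabling TRP would fix them.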
The last important thing before you turn that on: make sure that the SSL certificates you have uploaded contain the full trusted chain. That's the other gotcha that'll get you a lot of times. Since we are terminating the traffic at that point, we present the certificate to the client, so if it's missing the chain, the client will get a warning and not trust the certificate. Unfortunately, there's currently no way to see whether the certificates you've already uploaded have that chain. So I recommend, and it's a little bit more work, but just to be on the safe side, unless you're in a change control window, go ahead and grab a new SSL certificate bundle, confirm it has the chain, and then upload that. You can have multiple certificates uploaded simultaneously within the WAF Gateway, and then you can choose the new one for your TRP configuration.
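Before uploading, a quick local sanity check on the PEM bundle catches the most common mistake (uploading only the leaf certificate). This is just a cert-count heuristic, not a cryptographic chain validation; the helper names are my own:

```python
# Sketch: sanity-check that a PEM bundle contains more than just the leaf
# certificate before uploading it. A full chain is typically 2-3 certs:
# the leaf, one or more intermediates, and optionally the root.

def count_pem_certs(pem_text: str) -> int:
    """Count certificate blocks in a PEM bundle."""
    return pem_text.count("-----BEGIN CERTIFICATE-----")

def looks_like_full_chain(pem_text: str) -> bool:
    # A single cert almost certainly means the intermediates are missing.
    return count_pem_certs(pem_text) >= 2

leaf_only = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
bundle = leaf_only * 3  # leaf + intermediate + root concatenated

print(looks_like_full_chain(leaf_only))  # False -> go grab the intermediates
print(looks_like_full_chain(bundle))     # True
```

This will not tell you whether the certs actually chain to each other, but it will stop a leaf-only upload before clients start seeing trust warnings.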
What are the top 10 support issues that customers have with WAF? Ten is a lot, so maybe the top three or four would be fair.
The answer here is, there are two things. One, it depends on the deployment and what you're trying to protect. Sometimes it could be specific to the application that you're trying to integrate with, say, for example, Salesforce. Salesforce probably needs a different configuration than a usual website because of the host headers passed via the CNAME to the origin. As for the top 10 issues, I would say top three. The first is things that are limited in the UI. For example, you cannot change TLS versions or ciphers for a site; it has to be done via support. We are working on moving those. One of the biggest things we have done is moving backend configuration into the UI.
So there is a new delivery option, or the caching option, you will see. There are a lot of new features added which were all backend configurations done by support, now moving to the UI. Still, there is so much configuration back there; some of it is even new to me. But that, I believe, is the top issue. Second is SSL. Sometimes the SSL expectation is, "I enable SSL, it should work right now." We are not the CA provider; we integrate with the CA provider, which is GlobalSign, and they have their own process. The process may take up to 24 hours. So we set the expectation: if you're enabling SSL on Imperva and you need it right now, use a custom certificate. If you need Imperva to support non-SNI, set the expectation at 24 hours, unless it goes through extended validation, which requires a phone call and approval of who you are and why you are requesting it, something like that.
So the first one is the SSL ciphers, the second is SSL certificates, and the third, I would say, is configuration that cannot be done in the UI. For example, you want to create an IncapRule with a regex; we do not allow users to use regex because it may be a performance penalty in our system, so we do it via a support call. Understand, this is a SaaS service; it's not an on-premises solution where you can do everything because you own it. Because it's a SaaS service, there are some limitations in how you operate and how we want you to operate. So these are some of the things.
One of the other things that we have seen in support tickets is, "I did not get an alert." Please ensure that you have proper notifications set up in your system, and understand the different notifications that come and go. And if you find something is missing, raise a feature request; if there is a need for it, we will definitely add it. But eventually, also use a SIEM, because most of the data that you are trying to get as an alert is in the SIEM. The SIEM, depending on the push configuration that you're using, could have a five to ten minute delay. We are working toward near real-time, which is a very good approach for Imperva to help our customers, and you can get more alerts out of it. So I would say those are the top three or four: limitations in the UI, the SSL issues, and the alerting questions we see in support tickets. Also, please make sure you have access to the status page, so you know what's happening. And I think that's all.
A quick thought there, and that's a very interesting question because, in 2018, before I opened up the community, the first thing I looked for was the top support tickets for cloud. From our support cases and everything else, it was SSL certificates. So what I did, and I've posted this in the community now, on the chat, is that when you see that link, you'll see all the support cases and questions that were asked specifically around SSL certificates. It was by far the top: SSL certificates were probably 40 or 50% of the questions across all of 2018's support cases for cloud, for Incapsula and Cloud WAF. So hopefully that helps.
Okay, good. So CJ has another question. Should the root CA certificate be required for the full certificate chain?
Yes, I always recommend it, and the vendors freely supply it on their websites. That's just my personal view. I know I've seen some discussions where, if you're really into optimizing, you leave out the root and include just the intermediate, because it'll chain up to the root anyway. I personally prefer to have the root and intermediate, so that there are no questions, because the last thing you want is for your users to receive a notification that they don't trust the certificate. So I guess I'm kind of old school that way, in that I prefer to have all three: the root, the intermediate, and the certificate itself. That's just my recommendation.
I 100% agree with Jared on that one. Actually, back from my days of being a user of WAF Gateway, we ran into this problem and saw that we had some users on some browsers that didn't have the specific root CA that we were presenting. And it wasn't even a WAF-related issue, just an Apache-related issue. But by ensuring that the root CA was there and present, that meant we could actually deliver the full certificate chain. And the one thing I want to say this is especially important for is not just web applications, but also APIs. The reason being, if you look at it from an API perspective, the root CA store that most applications are going to go off of is going to be different than what the browsers are typically looking at.
And so, while browsers might be really good at chasing down a missing chain, for APIs it might be a mixed bag. You have no idea what the client that's implemented against the API is actually going to be using. So this just provides you with the ability to satisfy the full certificate chain.
Also, to verify how the client will react, you can create a site with only the server certificate, without the intermediate CA, and run a [inaudible 00:45:52]. It will tell you which intermediate and root CA certificates you're missing, and it will tag it: "You require extra download." Now you're at the mercy of the client, expecting the browser will do that extra download for you. Of course, the server will chain it to whatever it downloads, using OCSP or whatever. But as Peter said, if you're using a trust store and the client is looking for root CA verification, especially for client certificates, it will just say SSL handshake failure, and there is nothing you can do because you ...
Because most client code comes from community libraries, you're not writing your own code to change the behavior when the SSL handshake fails. Of course you can do it, but then again, what is the use of the CA certificate?
Now, we do have one question that I didn't get answered yet, and I think Abhishek is probably the best person to answer it. What are the recommended steps for performing a self-service health check of your Cloud WAF deployment?
Very good question; I hear it all the time. And when I ask a customer, "What do you mean by a self-service health check?" they will give me six questions or six more details. I think the focus here should be how to measure and, if there's a deviation, how to alert. So first of all, my favorite answer is: review the alerts from the SIEM. And the SIEM needs data. Why data? Because otherwise you don't know whether something is good or bad; you don't know the history of the application. Are you expecting client handshakes to fail all the time? It could be because it's an API and the client just resets because of idle time, or whatever the reason is. So have data; data gives you insights, and you can create alerting from there.
Review the error codes. There is a page in our Cloud WAF documentation which lists the error codes, and it gives you the same info. You can use that to understand, when an error is happening, what is happening. A simple way of seeing it is to look at or filter the events based on the client. You may not know the client, but there's a snapshot the client can send to you. If not, you can definitely see whether it is a problem for other clients or not. A simple one could be an SSL handshake failure to the origin; another issue could be a POST timing out because the code just changed and it now has a lower POST timeout.
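The per-client triage Abhishek describes is a simple group-and-count over your exported events. The field names below are illustrative, not the exact Cloud WAF log schema; the point is to see whether a failure is one client's problem or everyone's:

```python
# Sketch: group Cloud WAF error events by (client IP, error type) so you
# can tell a single misbehaving client apart from a site-wide problem.
from collections import Counter

# Hypothetical exported events; real field names depend on your log format.
events = [
    {"client_ip": "203.0.113.9",  "error": "origin_ssl_handshake"},
    {"client_ip": "203.0.113.9",  "error": "origin_ssl_handshake"},
    {"client_ip": "198.51.100.7", "error": "post_timeout"},
]

by_client = Counter((e["client_ip"], e["error"]) for e in events)
for (ip, err), n in by_client.most_common():
    print(f"{ip}: {err} x{n}")
```

If one IP dominates a given error, it is likely that client's configuration; if the counts spread across many IPs, look at the origin or the site settings.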
We have six minutes as the actual timeout, and it may need more. Finding the right health check is not a one-step thing; it is a continuous process, because the code is also changing all the time. If you expect the health check you set up to still be good after six months, it may not be, because you didn't have an API before and now you have API clients; it depends. Again, review the actions: are you issuing a DDoS challenge for an API? It will never be able to pass a CAPTCHA; that's a bad action. So always review the action you're applying, and weigh it against who the users are and what they can actually do. These are some insights. One recommendation here: create proper monitoring for your site. Don't depend only on the SIEM.
The best approach is, again, synthetic testing. You probably need to do synthetic testing for your site, for a specific application. It could be log in, reset, add to cart, check out, depending on what your application is, or a health check, whatever the application calls for. The most useful thing: do it with Imperva and without Imperva. If you see a deviation, that is a latency issue, and by comparing with and without Imperva, you can catch it. Finding a problem is more important than solving it, because you can't solve it if you don't know what the problem is. So catch the problem. Look at the site settings; there are alerts which come for origin failures. It could be the server, the data center, or the application; look for those alerts. And create a specific filter in your email so that when no_reply@incapsula.com sends a notification email, you see it. It may be a thousand emails, but you're looking for server failures, so create a filter on your inbox.
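The with-and-without comparison above reduces to a small calculation once your synthetic checks record timings. This is a sketch with made-up numbers and a made-up threshold; in practice the "direct" samples come from hitting the origin directly, and the threshold depends on your site:

```python
# Sketch: compare synthetic-check latency through the WAF path vs. direct
# to the origin. Using the median keeps one slow outlier from skewing
# the comparison.
import statistics

def latency_deviation(via_waf_ms: list, direct_ms: list) -> float:
    """Median added latency (ms) introduced by the proxy path."""
    return statistics.median(via_waf_ms) - statistics.median(direct_ms)

via_waf = [120.0, 131.0, 125.0, 640.0]  # one slow outlier; median discounts it
direct  = [110.0, 115.0, 112.0, 118.0]

added = latency_deviation(via_waf, direct)
print(f"median added latency: {added:.1f} ms")
if added > 50:  # alert threshold is site-specific
    print("ALERT: investigate proxy-path latency")
```

Here the median added latency works out to 14.5 ms despite the 640 ms outlier, so no alert fires; a sustained shift in the median is what you want to page on.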
This has to be a blog post in itself, because you've basically just listed like 10 blog posts.
These are just common things. There's also a weekly report that comes in for the account; you can see the top trends happening, attacks coming in for which site, bot mitigation happening, things like that. Another thing is-
One thing ... I'm sorry, one thing to jump in here that you didn't mention, that I see wrong all the time, and we actually added this into Attack Analytics as an insight: not locking down your origin servers. This is something we started running automated checks against and reporting back to users. But please, please make sure your origin servers are locked down. What that means is putting a firewall rule, a security ACL, or a security group in AWS to limit traffic so that it originates only from Cloud WAF. There's nothing worse than doing all of this work implementing Cloud WAF and everything, and then basically leaving the back door open. So always check that.
And again, we've made this a lot easier by including this in Attack Analytics, but, that is bar none, I think one of my biggest recommendations for best practices.
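A minimal sketch of that lockdown check: given the proxy's published IP ranges, verify that traffic reaching the origin came through them. The ranges below are deliberately placeholder (TEST-NET) blocks; pull the real, current list from Imperva's published IP ranges, not from here:

```python
# Sketch: check whether a source IP falls inside the proxy ranges your
# origin firewall should allow. The ranges here are PLACEHOLDERS only.
import ipaddress

ALLOWED_RANGES = [ipaddress.ip_network(n) for n in (
    "192.0.2.0/24",     # placeholder (TEST-NET-1), not a real Imperva range
    "198.51.100.0/24",  # placeholder (TEST-NET-2), not a real Imperva range
)]

def from_waf(source_ip: str) -> bool:
    """True if source_ip belongs to one of the allowed proxy ranges."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in ALLOWED_RANGES)

# A request reaching the origin from outside these ranges means the back
# door is open and the firewall/ACL/security group needs tightening.
print(from_waf("192.0.2.55"))   # True  -> came through the proxy path
print(from_waf("203.0.113.9"))  # False -> direct hit on the origin, investigate
```

The same membership test works as a SIEM enrichment: tag every origin-side request whose source is outside the allowed ranges and alert on it.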
Yes, I agree with that, Peter. One of the things some of our customers are doing is creating an alert in the SIEM which says: if the request source is not an Imperva IP (we do publish all our IPs), what is the action? If the origin is open, then they understand they have to block it. So the SIEM again helps there: you're looking for a report, and the SIEM gives you the report that the IP is not Imperva's and traffic is going to your origin directly, for verification. Another way of doing this is to automate: leverage the API, create incident response playbooks, and use the X-Ray debugger, because what is happening at the time is what needs to be troubleshooted. If the problem has resolved and you don't have any logs, then of course we will try our best to figure out and solve the problem, but we don't have enough information. So data, again, plays a crucial role in what you need to do at that time.
One quick thing I wanted to add on what Peter touched on: if you don't have Attack Analytics (but you should, wink wink), there is an open source tool called Site Protection Viewer Master out on GitHub, and it will also do bypass checks. So if you don't have Attack Analytics, you can check out some of the open source tools.
And the best way to run that tool is, you can look in the SIEM and see what the server ports are, and use all the server ports. You can configure those server ports in the tool, and you can check whether the origin is open for one specific application. So you can customize the ports that you want to scan. Again, this is not a TCP check, it's an HTTP check, so it is different than Nmap.
Can I vote for Abhishek doing a future webinar just talking about all the cool things you can do in a SIEM with all of the data that we feed into it and what he's saying? Because I've learned so many things from you in this presentation that I didn't even know existed.
Copyright © 2019 Imperva. All rights reserved.