By Robert Meyers, Partner Solutions Architect, CISM, CDPSE, Fellow of Information Privacy, One Identity & Dan Elder, Senior Engineer and Linux Services Manager, Novacoast

The pace of change in IT has always been staggering. No organization can survive long if it doesn't keep up with technologies that increase productivity. With that comes the critical need to secure data and resources in an increasingly porous environment. The explosion of remote workers has accelerated the blurring of lines between "internal" and "external" to the point that assets now operate everywhere, potentially with access to everything. This is what keeps your CISO up at night.

This porous environment is the new normal for business. The line has blurred to the point that users, not just data and applications, are distributed. In the new normal, the world is the new on-premises.

A successful security program has many pieces, all necessary to mitigate the risk that comes with this new normal. Here we focus on one of the less glamorous but equally critical pieces: handling log and event data. Everything generates some type of data stream to capture system events, performance metrics, or other potentially useful information. The trick is to capture this mountain of distributed data, filter out what isn't useful (or what shouldn't be captured for privacy or security reasons), and then deliver it to the right consumer so it can yield timely, valuable insights. Whether the goal is threat hunting, tracking malfunctioning devices, or spotting customer purchasing trends, your data is a core differentiator that can decrease threat detection time, keep your environment running smoothly, and offer a competitive advantage in your market.

Dealing with all this data can be painful, though. Each component might use a different format, have a different transport mechanism (or none at all), contain a large amount of useless data, or contain sensitive data that needs special handling. The explosion in the variety of log data, and in the methods for getting it from point A to point B, means yet more disparate technologies to purchase, deploy, secure, and manage. The tremendous value of this data is held back by the complexity of capturing all of it, sorting out what matters, and feeding it to the right consumer, whether that's a SIEM, a performance monitor, BI, ML/AI, or something else entirely. And because storing and indexing this data often carries a massive cost, it's critical to have a solution that can separate the signal from the noise.
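
To give one concrete taste of what that handling looks like, below is a minimal syslog-ng configuration sketch that masks data matching a sensitive pattern and drops known noise before anything is stored or forwarded. The regular expression, message text, and rule names are illustrative assumptions, not drop-in rules for any particular environment.

    # Mask anything that looks like a payment card number before it is
    # stored or forwarded (the pattern here is deliberately simplistic).
    rewrite r_mask_pan {
        subst("[0-9]{13,16}", "****MASKED****", value("MESSAGE"), flags("global"));
    };

    # Drop chatty health-check messages that offer no value downstream.
    filter f_drop_noise {
        not message("health-check OK");
    };

    log {
        source(s_local);            # assumes a source block named s_local
        rewrite(r_mask_pan);
        filter(f_drop_noise);
        destination(d_central);     # assumes a destination block named d_central
    };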

  • How would you capture the event stream of a newly distributed workforce and a rapidly evolving infrastructure, and handle all that data?
  • How would you ensure you don't lose anything in the process, without having to employ multiple methods of collecting and transporting the data?
  • How do you avoid the redundancy and complexity of overlapping solutions that become a management headache and create more problems than they solve?

It turns out there is an incredibly mature solution based on a protocol almost all of us have used before. Syslog isn't new; it has been around since the '80s, and as a result it's a well-established standard that almost everything supports. By itself, though, it's limited, because the original implementations weren't designed for today's needs. This is where a solution like syslog-ng comes in: it fully supports the standard that everything speaks, while adding the features an enterprise environment needs.
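
To make that concrete, the minimal sketch below collects local system logs, accepts classic syslog from the network, and writes everything to a single file. The version number, port, and path are illustrative placeholders:

    @version: 3.38                            # adjust to your installed version

    source s_local {
        system();                             # local system logs
        internal();                           # syslog-ng's own messages
    };

    source s_network {
        network(transport("tcp") port(514));  # classic syslog from devices
    };

    destination d_central {
        file("/var/log/central.log");         # placeholder path
    };

    log {
        source(s_local);
        source(s_network);
        destination(d_central);
    };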

Those enterprise features include the ability to ship data to and from a wide variety of sources and destinations such as Hadoop, Elasticsearch, MongoDB, Splunk, and Kafka. Fully encrypted, lossless transport that ensures log data gets where it needs to go securely. The ability to route log data from X sources to Y destinations from a single system, so you can unify your log collection and management. And the ability to easily filter out the noise in your environment, so you don't waste storage and licensing costs on data you don't need.
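
For instance, a single syslog-ng instance can accept RFC 5424 syslog over TLS, keep only the messages that matter, and index them in Elasticsearch. The sketch below assumes syslog-ng OSE 3.21 or later for the elasticsearch-http() destination; every hostname, path, and index name is a placeholder:

    # Receive syslog over TLS so nothing crosses the wire in the clear.
    source s_tls {
        syslog(
            transport("tls")
            port(6514)
            tls(
                key-file("/etc/syslog-ng/certs/server.key")
                cert-file("/etc/syslog-ng/certs/server.crt")
                ca-dir("/etc/syslog-ng/ca.d")
            )
        );
    };

    # Keep only warning-and-above messages for the analytics store.
    filter f_warn_up {
        level(warning..emerg);
    };

    destination d_elastic {
        elasticsearch-http(
            url("https://elastic.example.com:9200/_bulk")  # placeholder URL
            index("logs-${YEAR}-${MONTH}-${DAY}")
            type("")                                       # empty for ES 7+
        );
    };

    log {
        source(s_tls);
        filter(f_warn_up);
        destination(d_elastic);
    };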

You do not have to rely on siloed log data, riddled with gaps yet full of useless information.

You don’t have to settle for just meeting compliance requirements or company directives.

You can do something incredibly useful with all that data once you find the needles in the haystack. It all starts, though, with collecting, transporting, and filtering the data flowing through your network.

What if you could collect all the logs, all the disparate data, even integrated script outputs, with one tool instead of multiple agents and systems?
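
Script output is one example: syslog-ng's program() source runs a command and ingests each line it prints as a log message, so ad hoc checks ride the same pipeline as everything else. A sketch with a hypothetical script path:

    source s_script {
        program("/usr/local/bin/inventory-check.sh");  # hypothetical script
    };

    log {
        source(s_script);
        destination(d_central);   # reuse the central destination from earlier
    };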

What if you could manage your compliance and still forward to a SIEM system for security? 

What if you could get more useful data and still spend less on your SIEM? What if you don't want to use your expensive SIEM (or don't have one) for all your log aggregation and search needs, but still need a search appliance for lower-value data?
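
A common pattern behind that saving is to archive the full stream cheaply while forwarding only the security-relevant subset to the SIEM. In the sketch below, the facility and severity choices, paths, and SIEM hostname are placeholder assumptions to adapt to what your SIEM actually ingests:

    # Archive everything to inexpensive storage for retention and search.
    destination d_archive {
        file("/var/log/archive/${HOST}/${YEAR}-${MONTH}-${DAY}.log");
    };

    # Forward only authentication and high-severity events to the SIEM.
    filter f_siem_worthy {
        facility(auth, authpriv) or level(err..emerg);
    };

    destination d_siem {
        syslog("siem.example.com" transport("tls") port(6514));  # placeholder
    };

    log {
        source(s_local);
        source(s_network);
        destination(d_archive);        # full stream, no SIEM licensing cost
    };

    log {
        source(s_local);
        source(s_network);
        filter(f_siem_worthy);
        destination(d_siem);           # only what the SIEM needs to see
    };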

This is possible, and Novacoast is in a position to show you how and to guide you on your journey.
