Enhance intrusion detection systems with AI and correlation

Intrusion detection systems (IDS) and intrusion prevention systems (IPS) are great when adversaries are on the outside looking in. For enemies already inside the walls, however, it’s time to arm your team with the superpowers they need to find and eliminate those threats. Don’t get me wrong, the security solutions deployed at the edge of the security stack are a necessity and will likely evolve with threats as they get more and more complex. As we learned with the SolarWinds breach and the more recent Exchange Server exploit, the enemy is already living and expanding inside the infrastructure. What solutions and capabilities do we need to arm our analysts with to discover, understand, and eliminate these threat actors that are so deeply entrenched?

In short, the analysts combating threats like SolarWinds must be empowered… no, superpowered, to meet and find the complex infections that plague us. While we don’t have an iron suit or magic space stones, we do have a variety of AI solutions that can help us. Coupling that AI with the ability to correlate data from a myriad of sources integrates and augments the capabilities of the existing security stack. Creating a level of normalization across vendors enables an automated, orchestrated response regardless of platform. This allows rapid, if not automated, correlation that, when presented to Security Operations Center (SOC) analysts, empowers them to make decisions at machine speed. Super-powered speed, even.

Why is AI so important to this concept? The threats facing the analyst are moving far beyond simple signature-based detection. Basic indicators of compromise (IOCs) no longer consist of one or two behaviors that reveal an intrusion; instead they comprise multiple clues that are not easily shown to be related without true AI to sift through the noise. Both unsupervised machine learning and supervised AI models must be leveraged to find the anomaly hiding in plain sight. This empowered X-ray vision must be coupled with the ability to take immediate action, while at the same time providing our augmented analysts with the power to make far-reaching decisions at near-machine speeds.

Lastly, this detection engine needs to be deep in the core of the infrastructure, not just at the edge, because if the threat is already inside the environment, you only see the threat when your data waves goodbye on its exfiltration journey. Instead, why not look for that pattern of life deep inside your network? Let the AI map the relationships and behaviors of your users, devices, and applications in their daily lives before enacting its swift enrichment and response.

Only with AI can we readily identify that a trusted system has been compromised, empowering the analyst to act. AI isn’t there just to see the threat but to confirm it and drive large-scale remediation, while the analyst gathers the data to justify the response. This is where security orchestration, automation, and response (SOAR) in particular shines, as these repeatable actions must be automated to cut down response time and get ahead of intrusions.

It’s all well and good to be concerned about content, but it’s really the behavior that tells the story. In the case of SolarWinds, the enemy hid in the white noise that makes up the background of every network. AI, however, could (and did) recognize those abnormalities, alert the SOAR platform, and drive the correlation that finds and fights enemies already inside the walls.

Why fly when you can SOAR? 5 things you’re getting wrong about security orchestration, automation and response

Security orchestration, automation, and response (SOAR) solutions are often billed as a panacea that will solve all of a security operations center’s (SOC) problems, reduce mean time to repair (MTTR), improve efficiency, act as a single pane of glass, and even make a really good cup of coffee. You name it and someone somewhere has claimed that a SOAR platform can do it. The truth, however, is a little more complicated.

Yes, a SOAR solution can automate a great number of tasks—if properly implemented. If a task can be broken down into steps that are repeatable, reusable, and consistent, then it has the potential to be automated. But if an organization tries to take on too much at once or is unfocused in its approach, the implementation can rapidly get out of hand and lead to failure and ultimately shelfware. Here are a few examples of common mistakes and misconceptions about SOARs.

Boiling the ocean

A SOAR solution can be incredibly powerful; the initial desire to automate everything in sight is akin to the first time you get a label maker. You want to apply it to everything, all at once. Some of the worst experiences I’ve seen have come from environments where teams tried to build a complex interweave of use cases and became bogged down in details and frustration. The key to a successful implementation is to start small. Find one or two simple use cases that allow the SOC team to get a handle on what can be done and the thought process behind building a use case. Initial simple automations and response actions, such as threat enrichment of an indicator of compromise (IOC), hash, or URL, are particularly effective as they can be easily reused as part of more complex actions later.
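
To make the "start small" advice concrete, here is a minimal sketch of a reusable IOC-enrichment step of the kind described above. The threat feed is stubbed out with a local dictionary; in a real SOAR workflow it would be an API call to your enrichment vendor, and all names here are illustrative rather than any real product's API.

```python
# A reusable enrichment step: look up a file hash and return a verdict.
# The "feed" is a stand-in for a real threat-intelligence service.

def classify_hash(sha256: str, feed: dict) -> dict:
    """Return an enrichment verdict for a file hash."""
    hit = feed.get(sha256.lower())
    if hit is None:
        return {"ioc": sha256, "verdict": "unknown", "score": 0}
    return {"ioc": sha256, "verdict": hit["verdict"], "score": hit["score"]}

# Stub threat feed standing in for a real enrichment service.
LOCAL_FEED = {
    "deadbeef" * 8: {"verdict": "malicious", "score": 95},
}

if __name__ == "__main__":
    for ioc in ["deadbeef" * 8, "cafef00d" * 8]:
        print(classify_hash(ioc, LOCAL_FEED))
```

Because the step takes the feed as a parameter, the same function can later be dropped into larger workflows unchanged, which is exactly the reuse benefit the paragraph above describes.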

Training? I don’t need any stinkin’ training!

Yes, you do. While training is often the first thing on the cutting room floor when budgeting for a new solution, it usually makes the difference between a successful implementation and a package becoming shelfware. This is your opportunity to ask questions of the people who implement and use the technology daily. Take advantage of it. A SOAR platform, like most integration-focused solutions, has many hidden features and nuances in how complex actions like workflows are created. These automated actions will, ideally, help run your business, and you’ll need to understand how they are constructed.

I have scripts, isn’t that the same thing?

Most engineers, analysts, or administrators who have worked in IT for more than a few years have run into tasks that they find themselves doing repeatedly. Inevitably, someone on the team will write a script, whether it is Visual Basic, a batch file, or a snippet of Java, for each of those routine tasks. Those scripts are running continually in a SOC near you right now. So, the question becomes: If I’ve already got scripts running, why do I need a SOAR? Remember, SOAR stands for security orchestration, automation, and response. Automation refers to performing singular tasks repeatedly, orchestration is putting multiple singular tasks together, and response is really the key because it’s the ability to evaluate, make a choice, and then perform additional actions. The ability to build in complex response actions, either in an automated fashion or via human interaction, is one of the primary differentiators of a good SOAR platform. This doesn’t mean throwing the scripts out; it means converting them into SOAR workflows that can provide response choices, in-depth auditing and error tracking, and consistent integration across multiple platforms. This is where SOAR sets itself apart.
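
The automation / orchestration / response distinction above can be sketched in a few lines. Each "task" below is the kind of singular script a SOC already has; the workflow orchestrates them and adds a response decision plus an audit trail, which are the pieces a SOAR platform supplies out of the box. Everything here is illustrative logic, not a real SOAR vendor's API.

```python
from datetime import datetime, timezone

def enrich(alert: dict) -> dict:
    # Automation: one repeatable task (a stand-in scoring rule).
    alert["score"] = 90 if alert["ioc"].startswith("bad") else 10
    return alert

def run_workflow(alert: dict, audit: list) -> str:
    # Orchestration: chain tasks together, recording each step.
    audit.append((datetime.now(timezone.utc).isoformat(), "enrich"))
    alert = enrich(alert)
    # Response: evaluate, make a choice, then act.
    action = "isolate_host" if alert["score"] >= 80 else "close_as_benign"
    audit.append((datetime.now(timezone.utc).isoformat(), action))
    return action

if __name__ == "__main__":
    trail = []
    print(run_workflow({"ioc": "bad-domain.example"}, trail))
    print(trail)
```

The audit list is the part plain scripts usually lack: a SOAR platform records every step and decision automatically, giving you the error tracking and justification trail discussed above.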

It will be done tomorrow, right?

Not likely. While an initial set of use cases or workflows can usually be imported from the SOAR vendor, they still need to be customized to your environment. For instance, a workflow may have been written for a different firewall or threat feed vendor. Each step will need to be verified and tested against the current versions of the platforms deployed in your environment. A simple version difference in the target platform can make a huge difference. Which brings us to…

Integrations are simple

Umm, no. To be successful, a SOAR platform will need to communicate with many different platforms that already exist in your environment. Let’s face it, the IT space is full of companies that are often competing with one another in multiple verticals, and one vendor is rarely sole-sourced throughout the organization. It’s not uncommon to see vendors significantly change APIs, database structures, architectures, and platforms between versions, with missing or incorrect documentation to go with it. These changes are not made to purposefully break outside integrations but are instead made with their own interests in mind. Simply put, IT infrastructures are complex environments with lots of moving parts that need to be carefully integrated to get the best value from the solutions. Often the response from vendors’ support teams will boil down to “not my problem.” Ultimately, a good SOAR vendor will try to keep up with integrations as new versions are released, but some of this also comes back to a good relationship between you and your vendor. Simply letting them know that a new version has been released and that you intend to upgrade soon can change the integration team’s process to better support you.
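
One defensive pattern for the version-drift problem described above is to pin the API versions an integration was validated against and fail loudly when the target platform reports something newer, rather than letting a silently changed API corrupt a workflow. The product names and version registry below are hypothetical, a sketch rather than any vendor's mechanism.

```python
# Registry of product versions this integration has actually been
# tested against. Anything else should trigger re-validation, not
# a silent best-effort call.
TESTED_VERSIONS = {
    "acme-firewall": {"9.1", "9.2"},
}

def check_integration(product: str, reported_version: str) -> bool:
    """Return True only if this product/version pair has been validated."""
    return reported_version in TESTED_VERSIONS.get(product, set())

if __name__ == "__main__":
    print(check_integration("acme-firewall", "9.1"))   # validated
    print(check_integration("acme-firewall", "10.0"))  # needs re-testing
```

A workflow that runs this check as its first step turns "the vendor changed the API" from a mysterious mid-playbook failure into an explicit, auditable finding.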

Things to keep in mind

So, what are the main takeaways? SOAR solutions can be incredibly powerful enablers of the cyber and operations teams if some simple rules are followed:

  • Stay focused. Choose a singular task to learn what works in your organization. Use this as your in-house training scenario to learn the process.
  • Take your time. Diagram the workflow on a whiteboard and take your time finding the lowest common denominator to help pick one or two use cases to leverage as your showcase.
  • Identify simple integrations. Choose the deployed solutions that can be easily integrated to start with. Typically, they will be API driven and allow you to combine with threat enrichment to see immediate benefits.
  • Re-use. Ideally, your SOAR platform allows you to reuse the work you’ve already done. You’ve created the first piece of the puzzle for the future and you can leverage that same structure and concept again to reduce the amount of effort on your next workflow.

Merlin Cyber has partnered with Swimlane to help our public-sector customers avoid these and many other challenges that they encounter. Swimlane provides a comprehensive SOAR platform leveraging a drag-and-drop workflow builder that enables organizations to rapidly build and deploy workflows to the field. With built-in case management, auditing, reporting, and a robust integration library, Swimlane provides environments with the tools they need to be successful.

If your organization wants to rapidly improve staff efficiency and drastically decrease MTTR by leveraging a powerful SOAR platform, we can demo Swimlane and help customize a solution that meets your objectives. 

Gain complete control and visibility of 50K endpoints in just 48 hours

Congress is working on another coronavirus relief package and telework measures are among the provisions being discussed. One group of senators is urging Congress to maintain maximum telework for federal employees throughout the pandemic. Another group of senators wants to see additional funding for upgrading agency IT systems. As they await a final bill and begin making decisions on end-of-fiscal-year dollars, federal agencies should strongly consider investments that enable effective telework, at scale, for the foreseeable future.

1E’s Tachyon is one such investment. Tachyon is a real-time, modern endpoint management solution that simultaneously improves employee experience and IT monitoring and remediation of devices. The single-agent platform is efficient, easy to deploy, and entirely API driven. All its capabilities can be leveraged through ServiceNow or used to augment tools such as Splunk and Microsoft Endpoint Manager (MEM). These robust integrations remove the need for multiple agents and provide federal agencies with several benefits.

Seamless Telework Experience

Tachyon gives IT teams enterprise-wide visibility of their devices from a single dashboard. Synthetic “microtransactions” periodically test the impact of a load on the environment to help identify processes that are interfering with normal operations, and how. This helps IT accurately gauge device responsiveness and performance. With so many employees working remotely, the ability to see in real-time who’s working vs. who’s having issues is vital to improving the end-user experience.

Ticketing Workflow Automation & Reduction

Integrating with ServiceNow, Tachyon’s functionality can be accessed directly through a single console for incident tracking and remediation. Help desk staff can diagnose and fix issues directly from the ServiceNow admin page, significantly improving response rates and response times on incidents. Also, with Tachyon running in the background, remote workers get an enhanced version of ServiceNow’s virtual agent that enables self-servicing for common issues.

Real-Time Response & Remediation

IT staff can query endpoints and perform actions in a matter of seconds with Tachyon. When issues pop up, staff can address them by taking real-time control of endpoints across any of their environments. They can also prevent issues from replicating on other devices by setting new enterprise-wide policy controls. This proactive maintenance capability automates many previously manual IT processes, bringing substantial efficiencies.

Tachyon’s Core Modules

Put Tachyon to the Test

Merlin is currently offering federal agencies a 48-hour implementation of Tachyon against the tool’s two main endpoint use cases: visibility and control. After initial requirements are fulfilled by the customer, the rapid implementation of Tachyon will be structured like this:

Day 1 (Visibility): Setup, pilot group, and testing

  • Stand up required infrastructure
  • Install 1E client in a pilot group of endpoints
  • Test the software in your environment
  • Gain complete visibility of all remote devices

Day 2 (Control): Analysis, roll out, and collaboration

  • Analyze performance data from Day 1
  • Gain control of remote devices and fix any issues
  • Expand 1E client beyond pilot endpoints
  • Enable core teams to use Tachyon

1E will provide a dedicated solutions expert, at no cost, who will help fast track the deployment of the platform in your environment. This two-day implementation can be used to manage up to 50,000 remote devices.

If your agency needs to modernize its endpoint management to enable maximum telework, scale-up ticketing and remediation with automation and self-service, and maintain a proper security and compliance posture, we can demo Tachyon and customize a solution to meet your objectives.

Learn more about 1E and their solutions



From zero (trust) to hero: DevSecOps to the rescue

With the proliferation of cloud-based applications, organizations are faced with complex challenges regarding security as a whole and how to provide controls around the data that now resides somewhere in Neverland. We have moved away from the idea of the workplace’s four walls, complete with well-known kill chains, and find that our data is moving to the cloud at an alarming rate.

Perhaps the largest issue when moving to the cloud is trying to figure out how to secure applications, and users, without adding overhead and complexity. The cloud is supposed to make our lives easier while ensuring that the bad guys can’t get in. On the surface, this seems like an easy fix, especially when you think of it in terms of the existing security infrastructure. Unfortunately, reality sets in, and you begin to see this magical space rapidly becoming a logistical nightmare. How am I going to secure all this? Who is going to vet my users? What happens if an application is compromised, and allows a nefarious user to crawl my properties from east to west? How fired am I going to be at the end?

This is where the DevSecOps approach comes to the rescue. The whole premise of DevSecOps is around placing security controls within the applications themselves. In days gone by, things like admin credentials and cross-application access controls were hard-coded into apps. While this was business as usual for many years, it has increasingly become a highly available attack vector for hackers. When you combine this with known, and previously unknown, CVEs it becomes a glaring loophole in your security posture.

WHITE PAPER: On Your Mark, DevSecOps, Go!

The most common method for addressing pre-production security gaps is to have a human security specialist review the code, perform the STIG process, and apply various toolsets to identify and remediate vulnerabilities. The inherent problem with this process is that security staff are often overrun and facing long backlogs as the Dev team increases the speed at which apps are ready for deployment. Adding Sec to DevOps allows developers to inject security into the earliest processes and, by doing so, create self-healing, self-remediating applications that are fully aware of known exploits and continually updated to reflect novel threats in a fully automated process.

The second problem hard-coded credentials can present is in app-to-app communication. As it stands today, a vulnerable application can be compromised, allowing bad actors to view dependencies, make changes, or otherwise gain access to additional properties, all while masquerading as an approved application. This becomes an enormous concern, as some of the database, app, and user calls could cross multiple applications and provide access to something that may not have robust security controls baked in. Mainframes may be out of vogue, but they are often the legacy central repositories for huge amounts of data and may only support basic credential authorization. We’re often left relying on the legacy app, because a lift and shift to DevSecOps may not be feasible with today’s technologies. We must secure these apps.

In conclusion, these are some key points to take away for properly securing your applications and users as you move to the cloud:

  1. DevOps alone is falling by the wayside. You must look at holistic solutions to inject security as early in the CI/CD pipeline as possible.
  2. App-to-app security is paramount. If your applications cannot fully vet what they are talking to, they become open to compromise.
  3. Secure your cloud containers. This seems like a no-brainer but be mindful of your cloud architect’s time and workload, with the realization that posture management can be fully automated.
  4. Apply multi-factor authentication (MFA) to everything. Move security controls as close to the payload as possible. Network segmentation is great, but on its own it still leaves vulnerabilities open to exploitation.

Future-proofing technology stimulus spend

During the COVID-19 outbreak, agencies have shifted much of their workforce to telework. The strain on existing infrastructures has made headlines, whether it be the DoD asking employees to avoid non-essential services while on the VPN or other agencies staggering work schedules and limiting overall Citrix users. Further complicating these issues is the increase in cloud-based resources. 

I recently heard from an agency user attempting to participate in a required training session. Even though the training was hosted in the cloud, the user needed to use the overburdened VPN to access it, and the result was poor video quality. The problem is clear: current remote access systems were not scoped for this flood of users. 

As it always seems to be, while IT operations and security teams deal with new complexities and challenges, malicious actors see newfound opportunities. CISA recently released a new alert (AA20-073A) that includes the following considerations regarding teleworking:

  • As organizations use VPNs for telework, more vulnerabilities are being found and targeted by malicious cyber actors.
  • As VPNs are in use 24/7, organizations are less likely to keep them updated with the latest security updates and patches.
  • Malicious cyber actors may increase phishing emails targeting teleworkers to steal their usernames and passwords.
  • Organizations that do not use multi-factor authentication (MFA) for remote access are more susceptible to phishing attacks.
  • Organizations may have a limited number of VPN connections, after which point no other employee can telework. With decreased availability, critical business operations may suffer, including IT security personnel’s ability to perform cybersecurity tasks. 

The COVID-19 stimulus bill passed in March provided agencies the resources necessary to address telework infrastructure and security needs. Rather timely to this funding, there is new guidance from OMB updating the TIC 2.0 policy, providing the ability to use cloud-based solutions to assist with these issues. More specifically, the OMB memorandum on TIC 3.0 provides for the following new use case:

Remote Users: This use case is an evolution of the original FedRAMP TIC Overlay (FTO) activities. This use case demonstrates how a remote user connects to the agency’s traditional network, cloud, and the Internet using government-furnished equipment (GFE).

So how can agencies leverage these new TIC 3.0 guidelines to alleviate current strain and security concerns, while future-proofing their investments? TIC 3.0 allows agencies to modernize and move towards embracing a zero trust architecture (ZTA) by removing the outdated “trusted vs. untrusted” model and instead focusing the perimeter around the endpoint. To do this, the focus should be on the following key principles:

  1. Remove traffic destined for the cloud from current remote access infrastructure, thus lessening the load on the overburdened systems.
  2. Leverage the scalability and elastic nature of the cloud to deal with any further unexpected surges of remote access.
  3. Institute the principle of least privilege for remote access to overcome some of the shortcomings of VPN technologies.
  4. Where possible, move to an “identity as the perimeter” approach, targeting security at the remote user.
  5. Secure both new and legacy applications as the move to ZTA occurs, thus ensuring critical legacy systems are not left unsecured.
  6. Provide the least amount of friction to the end-users!
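
Principles 3 and 4 above can be made concrete with a toy sketch of "identity as the perimeter": access decisions are keyed to the user's identity and the specific application, not to a trusted network location. The policy contents, usernames, and application names here are entirely hypothetical.

```python
# Least-privilege policy: each user may reach only the applications
# explicitly granted to them, regardless of which network they are on.
POLICY = {
    "alice@agency.example": {"hr-portal", "timekeeping"},
    "bob@agency.example": {"timekeeping"},
}

def authorize(user: str, app: str, mfa_passed: bool) -> bool:
    """Grant access only to a known user, with MFA, to an explicitly
    granted application. Everything else is denied by default."""
    return mfa_passed and app in POLICY.get(user, set())

if __name__ == "__main__":
    print(authorize("alice@agency.example", "hr-portal", mfa_passed=True))
    print(authorize("bob@agency.example", "hr-portal", mfa_passed=True))
```

The deny-by-default shape of the check is the point: unlike a VPN, where reaching the network implies broad reachability, each user-to-app pairing must be affirmatively granted.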

By embracing TIC 3.0 and ZTA, agencies can augment current remote access capabilities (VPN, Remote Desktop, Citrix, etc.) by providing access to cloud applications without the need to use old remote access systems. Further, this can be done alongside the current infrastructure, avoiding the dreaded “rip and replace,” and increasing security along the way.

At Merlin, we scout innovative, emerging technologies and establish technology partnerships that allow us to effectively implement unique remote access strategies that incorporate zero trust principles. As the model below illustrates, we provide end-to-end secure access, leveraging highly scalable and elastic solutions. Using cloud-based and cloud-native technologies like Okta and Netskope Private Access can increase security while lessening the load on remote access infrastructure. Adding Silverfort’s unique SSO capabilities can bring those legacy systems into the security of today.

While there is no quick fix for legacy remote access systems, agencies can take the first steps in their zero trust journey while augmenting the capacity of current systems and increasing overall security. 

Cyber hygiene starts with good tools configuration

Last month, the Government Accountability Office released a new report titled DOD Needs to Take Decisive Actions to Improve Cyber Hygiene. The GAO report found that the Defense Department is behind on three major cyber hygiene initiatives and lacks cybersecurity accountability among its leadership. If a critical government agency like the DOD struggles with cyber hygiene, what about a regular company?

An average-sized company usually has 25-plus security vendors. Organizations have implemented tool after tool in efforts to secure their data, systems, and users. This has left them with misconfigured, repetitive, or siloed tools and an uphill climb toward proper cyber hygiene.

RELATED: 5 of the biggest cyber hygiene myths

While proper cyber hygiene involves tools, training, and policies, having a fragmented toolset makes the concept a non-starter. Tool fragmentation and overlapping tool capabilities put additional burden on IT staff, making it difficult to respond to threats, quantify risks, or effectively manage an organization’s most critical security controls. As a result, the organization’s cyber hygiene suffers.

Poor cyber hygiene creates security vulnerabilities that require decisive action. It’s vitally important to correctly configure, maintain, and ensure that your security tools are effective. In other words, cybersecurity leaders should consider maximizing the ROI on already-purchased tools before adding new ones to their crowded ecosystem.

Tool-proof your cyber hygiene

Practicing proper cyber hygiene goes beyond just purchasing and implementing security tools. Using the tools correctly is what helps solidify overall cybersecurity posture. And it all starts with proper configuration of the tools you have.

Establishing configuration baselines is a fundamental but often overlooked cyber hygiene task. Why else would tool misconfiguration be such a frequent cause of breaches? While we rely on security tools to maintain proper hygiene, their effectiveness is entirely in our hands.

Here’s how to weigh the performance and usage of existing security tools:

  1. Analyze if the tools you’re using are engineered properly and behaving correctly. For example, if it’s a vulnerability scanner, is it updated and scanning your entire IT landscape? If it’s a next-generation firewall, are you using all the features appropriately?
  2. Review and score every tool with a critical eye. Try to rationalize each tool against your organization’s current and future needs. Move past qualitative descriptions and into quantitative analysis by ranking and scoring them with questions like:
    • Does this tool have a niche or special purpose?
    • Is it more or less secure than other options?
  3. Examine each tool’s actual configuration. Is it configured securely? Does it have default passwords or other weak controls? How easy is it to harden?

The complexity of today’s IT infrastructures coupled with security tool fragmentation and misconfiguration makes cyber hygiene challenging for companies of all sizes. Security tools are only as strong as an organization’s internal process for maintaining them. Luckily, there are solutions that automate much of the work and provide organizations with a comprehensive way to implement and maintain proper cyber hygiene.

5 of the biggest cyber hygiene myths

Tackling common misconceptions about enterprise security

Proper cyber hygiene is a desirable but sometimes elusive practice for many organizations. And it can be hard to separate fact vs. fiction. Read on as Miguel Sian, Merlin’s Director of Solutions Architecture and Engineering, busts a handful of security posture myths.


Most organizations would agree that proper cyber hygiene is essential for maintaining their cybersecurity posture. Each will also likely affirm that they practice good cyber hygiene; yet, we find that many have considerable blind spots. We’ll shine a light on these blind spots by exposing five of the biggest myths about cyber hygiene.

First, a primer. What is cyber hygiene? The CERT Resilience Management Model (CERT-RMM) defines cyber hygiene as a set of practices for effectively managing the most common and pervasive risks to the organization. The Center for Internet Security (CIS) defines cyber hygiene as a set of baseline cybersecurity protections that help to secure an organization. Fundamentally, cyber hygiene involves the strategies and activities that ensure your enterprise IT security is in tip-top shape (health) and protecting your organization from threats (prevention).

RELATED: Cyber hygiene starts with good tools configuration

Proper cyber hygiene spans people, process, and technology. It starts with having complete visibility of all your assets, followed by effective security tools and processes to identify, detect, and protect your assets against threats. Last but not least, you must implement effective access management. With this as the backdrop, let’s quash five common myths about cyber hygiene.


“We have several management tools (i.e., NAC, SCCM) and a CMDB that ensure we know precisely what’s on our network.”

How many CISOs honestly believe that they have a truly accurate count of their hardware and software assets? Just one glance at two systems management tools (say, vulnerability management and Active Directory) would likely reveal a discrepancy in the total number of computer accounts in your enterprise. Furthermore, increasing cloud adoption and remote work can undermine what you believe is on your network.


“My users and endpoints are adequately protected with endpoint security tools such as anti-virus and EDR, along with policies we’ve implemented to protect our devices.”

Anti-virus and endpoint detection and response (EDR) solutions have long been good practices for endpoint hygiene, but they are no longer enough. New, emerging threats in the hardware layer – on mice, keyboards, webcams, switches – can go undetected by these endpoint security solutions. Furthermore, attacks on the supply chain compound the risks from these emerging threats.


“We have security tools and processes established for configuration management, patch management, and vulnerability management that ensure our basic security hygiene.”

Organizations often overlook, and fail to adequately monitor, the very tools and processes that carry out these basic security hygiene tasks. This is likely the result of lacking a central place to monitor the configuration and effectiveness of all their enterprise tools. Furthermore, organizations typically can’t relate these security challenges to overall business impact. For a complete picture of cyber hygiene, it’s important to know the tools’ security posture, their effectiveness in meeting the organization’s security controls, and how they protect the applications that deliver on business outcomes.


“Our annual compliance audits against industry security frameworks provide adequate security and communications for our stakeholders.”

Regular audits are essential, and frameworks such as the NIST CSF provide a comprehensive set of security guidance. Yet we’ve found that organizations are unable to continuously monitor their most critical security controls. As a result, organizations can neither prioritize what’s truly important nor effectively communicate risks across the enterprise.


“We have controls that ensure proper access management.”

If this were true, we would not be seeing an increase in data breaches, since a majority start with privileged credential abuse. Organizations must take a comprehensive approach to access management. Silos of identity sources and disparate identity management tools exist across the enterprise, making it challenging to secure access. It’s critical to establish visibility first, then monitor the security controls for access to critical systems.

It’s time to take a strategic approach to cyber hygiene. With today’s rapidly shifting situation in IT and business, risks and uncertainties abound. A renewed focus on the basic fundamentals of cyber hygiene provides us with the key principles and foundation needed to establish a comprehensive cybersecurity posture for our enterprise.

Blog Series: Supporting the Secure Workforce — Cloud Services

Harnessing the Ubiquity, Speed and Scale of Cloud Services

In Part I of our 3-part blog series – Supporting the Secure Remote Workforce: A Prescriptive Approach on How to Respond to the Rapid Surge of Telework and IT Services – we described the three components that agencies should manage and secure in a remote telework scenario. To recap, these are Cloud Services; Endpoints & Identities; and Cybersecurity & Enterprise Infrastructure. In this blog, we will expand on how to harness cloud services to enable the secure workforce.

Enabling Secure Cloud Access and VPN Services with Cloud Security Services/CASB

One of the security design patterns in CISA’s guidance utilizes cloud security services, more commonly known as cloud access security brokers (CASB). The CASB serves as a policy enforcement point and management entity for users’ traffic destined for cloud service providers. Since a majority of user network traffic can be optimized with direct-to-cloud connectivity, a CASB serves as a practical solution for teleworkers.
A CASB augments or adequately replaces the security stack typically found in traditional data centers or a TIC. And because a CASB’s core competency is “brokering” connectivity to thousands of cloud services, CASBs have established optimized network routing and technology integrations that further improve the remote worker’s experience. Many CASBs have expanded their security capabilities to include secure web gateway functionality, network threat protection, IaaS compliance, and VPN services. Agencies should consider these new cloud security capabilities to consolidate their cybersecurity tools and simplify operations.
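To make the “policy enforcement point” role concrete, here is a minimal sketch of the kind of decision a CASB makes for each cloud-bound request: match the destination against a catalog of sanctioned services and apply that service’s policy. The service names, policy fields, and decisions below are purely illustrative, not any vendor’s actual configuration.

```python
# Hypothetical sketch of CASB-style policy enforcement for cloud-bound
# traffic. Service names and policies are illustrative placeholders.

SANCTIONED_SERVICES = {
    "mail.example-agency.gov":  {"action": "allow", "dlp_scan": True},
    "files.example-saas.com":   {"action": "allow", "dlp_scan": True},
    "personal-storage.example": {"action": "block", "dlp_scan": False},
}

def enforce(destination: str, user_authenticated: bool) -> str:
    """Return the enforcement decision for one cloud-bound request."""
    policy = SANCTIONED_SERVICES.get(destination)
    if policy is None:
        return "block"            # unsanctioned (shadow IT) services are denied
    if not user_authenticated:
        return "redirect-to-sso"  # force authentication before brokering
    if policy["action"] == "block":
        return "block"
    return "allow+dlp" if policy["dlp_scan"] else "allow"
```

Because every request passes through this single chokepoint, the same catalog also serves as the aggregation point for visibility into which cloud services users actually reach.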

CARES Act – a $2 trillion stimulus package passed by Congress that calls for rapid expansion of citizen services and corresponding technologies to alleviate the stress on existing IT infrastructure services.
“How do we scale to support a growing need for online digital services?”

Other Cloud Services to Support the Digital Services and the Remote Workforce

Improving Citizen Services and Enterprise IAM with Identity as a Service (IDaaS)

With stimulus funding tied to increasing use of digital citizen services, agencies may need to rapidly develop and deploy citizen-facing web applications and resources that can benefit from highly scalable and secure cloud-based identity services. One practical use case is to quickly provision identity services in the cloud to augment or expand an agency’s existing identity & access management solution. Think of all the business processes, applications, and enrollments that agencies may need to enable in order to provide citizen services.

This same identity services platform can also serve as a logical policy enforcement point for an agency’s remote users. Policy enforcement needs to expand beyond traditional network access control points – especially in remote telework scenarios – to include user authentication. IT can centralize authentication and authorization using cloud services, allowing for ease of access, improved availability, and scale. Consider the paradigm of the user’s identity as the new perimeter, where policies for multi-factor authentication, single sign-on, and adaptive access can be applied.
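The adaptive-access idea can be sketched as a simple risk score evaluated at authentication time: familiar device and location plus a low-sensitivity resource gets straight single sign-on, while anomalies trigger step-up MFA or denial. The risk factors, weights, and thresholds below are hypothetical, not any specific identity provider’s policy engine.

```python
# Illustrative sketch of an adaptive-access decision of the kind a cloud
# identity service might apply. Factors and thresholds are hypothetical.

def access_decision(known_device: bool, usual_location: bool,
                    resource_sensitivity: str) -> str:
    risk = 0
    risk += 0 if known_device else 2      # unrecognized device raises risk
    risk += 0 if usual_location else 1    # unusual location raises risk
    risk += {"low": 0, "medium": 1, "high": 2}[resource_sensitivity]
    if risk >= 4:
        return "deny"
    if risk >= 2:
        return "require-mfa"              # step-up authentication
    return "allow-sso"                    # single sign-on is sufficient
```

The point of the sketch is that the enforcement decision follows the user’s identity and context, not a network location – the “identity as the new perimeter” paradigm described above.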

Secure Communications and Collaboration

Cloud-based communications and collaboration tools have become an essential part of our daily work and social life. With all the negative publicity surrounding the security and privacy of some web conferencing tools, it is essential to distinguish between the practical applications of consumer-focused web-conferencing tools and enterprise communications tools. For agency remote teleworkers, secure, enterprise unified communications and collaboration (UC&C) is essential. Security characteristics of an enterprise UC&C include 256-bit end-to-end encryption, compliance features such as archiving, enterprise integration, and administrative controls. Enterprise features such as 1:1 and group messaging, audio and video conferencing, file sharing, and screen sharing are essential for users to remain productive. Common use cases for secure UC&C might be conducting emergency response, cyber incident response, or sharing sensitive information containing PII, PHI, or highly sensitive/classified information.

Whether it’s for identity service, secure collaboration, email or other remote services, we can expect increasing demand for more cloud use cases due to the ease of use, scale and rapid deployment of cloud services. It’s important to understand how best to govern the use of cloud services, while providing a frictionless experience for your remote teleworkers and consumers of your cloud services.

Blog Series: Supporting the Secure Workforce — Teleworker Spotlight

A Prescriptive Approach on How to Respond to the Rapid Surge of Telework and IT Services

DHS CISA recently published Interim Telework Guidance to support federal civilian agencies as they deal with the surge in teleworking. The guidance was issued to help agencies leverage existing resources to secure their networks, and comes on the heels of the pandemic crisis as agencies face challenges with insufficient capacity and legacy infrastructure.

The DHS guidance specifically addresses the scenario of remote users connecting to agency-sanctioned cloud services. While this interim guidance is temporary and does not represent a particular TIC 3.0 use case, it will be integrated into the TIC 3.0 remote user use case. Importantly, it provides a blueprint for constructing resilient and flexible infrastructure ready to support what perhaps will become a “new normal” of ubiquitous cloud services supporting a remote workforce and digital citizen services.

Cloud Services, Network Challenges and Recommended Approach

It’s no surprise that cloud is the main focus of the guidance, as many agencies have moved IT services to the cloud (e.g. email, collaboration, CRM). With the sudden surge of remote teleworkers, agencies’ network bandwidth, VPN devices, and cybersecurity stacks are strained. This is a result of traffic hairpinning, where remote workers’ traffic is routed through centralized trusted Internet connections. Legacy network architecture in the federal government is not optimized for the shift to a user-centric, direct-to-cloud network model.

With the telework guidance, CISA recognizes the need for agencies to support a more user-centric, direct-to-cloud network architecture. It provides guidance on how to effectively secure network traffic specifically for remote teleworkers connecting to cloud services. Utilizing policy enforcement points and management services, remote users can securely connect to agency-approved cloud services without the need for hairpinning.

Components of a Secure Remote Workforce

What constitutes a secure telework environment? First, it helps to understand two constructs that CISA illustrates in its guidance and defines more broadly in the TIC 3.0 documentation: the Policy Enforcement Point and the Management Entity.

A Policy Enforcement Point is a security device, tool, function, or application that enforces security policies through technical capabilities. Essentially, it’s a logical insertion point for control, manifested through policies. A Management Entity is a notional entity that oversees and controls the protections for data. It can be represented by an organization, network device, tool, function, or application. Basically, the management entity becomes an aggregation point for policy information, giving IT control and the ability to analyze and make intelligent decisions.
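A toy model can make the relationship between the two constructs concrete: individual policy enforcement points report their decisions upward, and the management entity aggregates that policy information so IT can analyze posture in one place. The class and PEP names below are illustrative only, not part of the TIC 3.0 documentation.

```python
# Toy model of the two TIC 3.0 constructs described above: PEPs report
# policy decisions, and a management entity aggregates them for analysis.

from collections import Counter

class ManagementEntity:
    """Aggregation point for policy information from many PEPs."""
    def __init__(self):
        self.events = []

    def report(self, pep_name: str, decision: str):
        # Each enforcement point reports its decisions here.
        self.events.append((pep_name, decision))

    def summary(self) -> Counter:
        # e.g. how many blocks vs. allows across all enforcement points
        return Counter(decision for _, decision in self.events)

me = ManagementEntity()
me.report("endpoint-agent", "allow")
me.report("casb", "block")
me.report("casb", "allow")
```

However the management entity is realized in practice – as an organization, device, or service – its essential job is exactly this kind of aggregation and oversight.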

With an understanding of these two constructs, we can think about the secure telework scenario in terms of three key components: Endpoints & Identities, Cloud Services, and Enterprise Cyber Infrastructure.

These three components serve as our management entities and logical insertion points for policy enforcement. In this blog series, we will discuss technologies, application of technology, and how we can align to the security best practices in CISA’s guidance.

Blog Series: Supporting the Secure Workforce — Cyber Resilience

Manage and Secure the Endpoints – Protect the Enterprise

“Down to just essential personnel working onsite, how do I support this rapid surge of remote teleworkers and IT services?”

Surge Readiness of People, Process and Technology

This is a theme we hear often from our customers. Operational efficiency is critical to successfully addressing this surge. We see this firsthand with the growing adoption and use of cloud services. When enabling the secure remote teleworker, besides the cloud service, there are two other critical control points for policies and management: the endpoint and the enterprise cyber infrastructure. These two control points are inherently intertwined: configuration settings, controls, and policies are applied at each, and each continuously feeds information to the other to adapt and improve overall security posture.

How Secure are Your Endpoints? The Need to Protect Against Peripheral-Based Threats

Threats to our endpoints continue to evolve. Whereas anti-virus/malware technology used to be adequate for endpoint security, threat actors now use signature-less, file-less, or zero-day attacks that render traditional anti-virus/malware tools less effective. As a result, endpoint security has evolved to include endpoint detection and response (EDR) and, more broadly, endpoint protection platforms (EPP). Many of these solutions use machine learning and utilize the cloud for speed, scale, and operational efficiency. We strongly recommend EDR/EPP as a first line of defense for your endpoints. For remote teleworkers specifically, a cloud-based EDR solution can improve IT operational efficiency with easier updates, threat detection, and response.

Another emerging threat vector is rogue peripheral-device attacks. Unlike threats that capitalize on vulnerabilities in the operating system or applications, rogue-device attacks operate at the physical layer, beneath traditional detection mechanisms. Often appearing to the operating system as trusted devices (e.g. USB hubs, keyboards, mice), they can bypass device policies and pose a hidden threat to endpoints. In one recent BadUSB attack, a malicious USB device concealed in a fake Best Buy gift card mailing delivered malware to a hospitality customer. This threat can be more acute in remote telework scenarios, given the vast number of consumer peripherals and the lack of IT visibility.
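One common mitigation is to stop trusting peripherals by default: when a device attaches, its USB vendor/product identifiers are checked against an allowlist before it is accepted. This is only a minimal sketch – real physical-layer protection requires hardware-level device fingerprinting, and the IDs below are hypothetical placeholders.

```python
# Minimal sketch of a peripheral allowlist check. Real rogue-device
# protection needs hardware fingerprinting; these IDs are hypothetical.

APPROVED_PERIPHERALS = {
    (0x046D, 0xC077),  # example approved mouse (vendor ID, product ID)
    (0x413C, 0x2113),  # example approved keyboard
}

def on_device_attached(vendor_id: int, product_id: int) -> str:
    """Decide whether to trust a newly attached USB peripheral."""
    if (vendor_id, product_id) in APPROVED_PERIPHERALS:
        return "allow"
    # Unknown devices are blocked and flagged for IT review rather than
    # trusted by default, since rogue devices present as keyboards or hubs.
    return "block-and-alert"
```

Note the limitation this sketch shares with software-only controls: a rogue device can spoof an approved vendor/product ID, which is why physical-layer fingerprinting matters for this threat class.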

The image below shows a compromised mouse containing a wireless controller that captures and transmits data to external sites. Supply-chain hacks such as these have become more prevalent, and agencies need a way to protect against them.

Ensure Productivity with Comprehensive Endpoint Visibility and Control

Real-time visibility, control, and compliance of endpoints — especially in remote telework scenarios — are critical for operational effectiveness. A performant, functional, and secure endpoint is crucial for agency teleworkers to remain productive and deliver on the agency’s mission. Proactively monitoring, measuring performance, and remediating at scale is a critical element of the secure remote workforce.

Monitor and Maintain Cyber and Enterprise Infrastructure Resiliency

This brings us to the last logical control point of our approach and arguably the most critical component: our agency’s cyber and enterprise infrastructure. IT services, policies, technologies and staff all emanate from our agency’s own premises. This still holds true even as we support a secure remote workforce.

The Principles of Zero Trust

The Interim Telework Guidance speaks well to the need to establish good cybersecurity hygiene for teleworkers and cyber infrastructure. Practices such as backup & recovery, vulnerability assessment, auditing, and inventory should be standard operating procedure. Merlin has developed a Zero Trust Security model containing the foundational security principles that support a secure remote workforce, based on the core tenets of identity, workload, and network security.

This zero trust security model enables the comprehensive telemetry and policy control points needed to secure the remote workforce.

Adapt, Automate, Detect, Respond

With the expanded threat landscape brought by the remote workforce, it is important to ensure that your cyber defense tools can adapt to the changing environment. Machine learning/AI-based solutions can effectively detect and protect your network against known and unknown threats. Furthermore, it’s important to ensure that your solution can integrate with the control points we discussed, whether they reside in the cloud, endpoints, or infrastructure.

With limited staff and growing demands on IT, orchestration and automation are more relevant than ever. Turning rudimentary, manual processes into automated workflows saves time for IT. An extensible platform with an open API framework provides quick and seamless integration with enterprise security tools, business systems, and their corresponding workflows.
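The idea of turning a manual process into an automated workflow can be sketched as a pipeline of steps stitched together over tool APIs: enrich the alert, contain the endpoint, open a ticket. The step functions below are stand-ins for calls to real (unspecified) security products’ REST APIs; the field names and ticket ID are illustrative only.

```python
# Hedged sketch of an orchestration workflow. Each step stands in for a
# call to an external tool's API; all names and values are illustrative.

def enrich_alert(alert):        # e.g. look up threat intel on the indicator
    return {**alert, "reputation": "malicious"}

def isolate_endpoint(alert):    # e.g. call the EDR platform to quarantine
    return {**alert, "endpoint_isolated": True}

def open_ticket(alert):         # e.g. create a record in the ITSM system
    return {**alert, "ticket": "INC-0001"}

WORKFLOW = [enrich_alert, isolate_endpoint, open_ticket]

def run_workflow(alert: dict) -> dict:
    """Run each automated step in order, passing the enriched alert along."""
    for step in WORKFLOW:
        alert = step(alert)
    return alert

result = run_workflow({"host": "laptop-42", "indicator": "203.0.113.9"})
```

Because each step is just a function over a shared alert record, steps can be reordered, replaced, or extended as tools change – which is the practical value of an open API framework for integration.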

At Merlin, we partner with market leaders and innovators in cybersecurity to bring you mission-ready solutions. We take a comprehensive approach to delivering an end-to-end security framework, based on zero trust security principles, to secure your remote workforce. Reach out to us for a briefing or demo of any of the solution capabilities described in our blog series.