The effectiveness of ransomware is increasing – due in part to the use of the decentralized form of payment known as cryptocurrency, but more so due to the effective and dynamic nature of the initial attack vector used in a ransomware campaign. The sophistication of the tools in use, combined with attack strategies that prey on poor cyber hygiene practices, can bring an organization to a screeching halt and result in defeat for cyber defenders.
Ransomware, at its core, threatens the reputation and operations of a business by denying service consumption and access to assets. Taking it up a notch, threat actors may compound the damage by publicly exposing data or leaking critical, sensitive information to public repositories. Corporations and government organizations alike must take a militant approach to securing their infrastructure if they wish to remain viable, and should consider an approach to managing ransomware threats that is akin to a campaign of war.
On the offensive, cyber attackers gain the knowledge necessary to penetrate an organization’s landscape using techniques such as phishing and social engineering. Once inside, threat actors begin searching for vulnerabilities to gain access to assets, hunt for valuable organizational data, and ultimately encrypt data that they believe is valuable and hold it ransom for a substantial payout.
On the defensive, cybersecurity professionals will use security awareness training, institute policies and procedures for cyber hygiene and data backup strategies, and deploy privileged access management (PAM) and endpoint cybersecurity tools.
The war is won either by hackers receiving their ransom or the infrastructure team successfully thwarting their attack. Let’s take a closer look at the battle strategies.
An organization’s entire IT/OT infrastructure is susceptible to ransomware attacks, from the routers and switches that pass data all the way to the endpoints where application transactions occur. Organizations must consider that an attacker is not a lone individual sitting in a basement eating delightful chips hoping to score a single, random incident, but rather a team of cyber engineers working together to find the highest-value assets at the largest companies, in pursuit of the largest payoffs. CrowdStrike has described the most efficient technique of targeted ransomware deployment as “Big Game Hunting,” the art of targeting institutions that are likely to pay high ransoms due to the criticality of their services.
To evade detection, cybercriminals leverage ransomware-as-a-service (RaaS) tools, which continually morph their signatures to keep the execution footprint from being detected by anti-virus scanners that look for static signatures or programs exhibiting common patterns. To amplify the attack’s impact, ransomware tools hunt down and destroy backups stored on local devices, making recovery that much more difficult. But, bottom line, it all starts with encrypting files to cut off the organization’s access.
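The weakness of static signature matching is easy to demonstrate: a cryptographic hash changes completely when even a single byte of the payload changes, which is exactly what RaaS builders exploit. A minimal sketch, using hypothetical payload bytes rather than any real malware:

```python
import hashlib

def signature(payload: bytes) -> str:
    """Compute a static SHA-256 'signature' the way a naive scanner might."""
    return hashlib.sha256(payload).hexdigest()

original = b"EXAMPLE_PAYLOAD_v1"      # stand-in bytes, not real malware
# A RaaS builder only needs to change a single byte (e.g., append padding)
# for the static signature to be entirely different.
morphed = original + b"\x00"

print(signature(original) == signature(morphed))  # False: hashes diverge
```

This is why the article stresses behavioral and pattern-based detection over pure signature lists.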
Security awareness and training are the cornerstone of the best defensive strategy for an organization. Increasing employee awareness of how cybercriminals use phishing and social engineering to obtain the vital information that will allow them to gain access and then move laterally throughout the organization, to encrypt and hold valuable assets for ransom, is the most effective defense. Beyond educating a human firewall, maintaining proper cyber hygiene ensures that policies and procedures are implemented across the organization to minimize the impact of a ransomware attack.
Several policies can be immediately instituted to increase the likelihood of surviving a ransomware attack.
- Separate backups of critical data from the assets they protect. If the ransomware application can’t reach, remove, or encrypt a backup, it remains a viable path to recovery.
- Utilize endpoint security tools that enforce the restriction of file read, write, and modify access to unknown applications. This defeats the ability of unknown applications to encrypt or write new data in new locations.
- Remove local administrator rights so users and applications cannot elevate privilege.
- Deploy tools that enable accurate threat detection and IOC awareness. Such threat intelligence will give IT/OT organizations time to evaluate the potential threat, investigate, and mitigate the damage effectively.
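The second policy above — restricting file writes to known applications — can be sketched as a simple allowlist check. The process names and protected paths below are hypothetical, and a real EDR enforces this in the kernel rather than in application code; this is only an illustration of the decision logic:

```python
# Assumed layout and allowlist for illustration only.
PROTECTED_DIRS = ("/data/finance", "/data/backups")
KNOWN_APPS = {"winword.exe", "backup_agent"}

def allow_write(process: str, path: str) -> bool:
    """Deny write/modify access to protected paths for unknown applications."""
    if not path.startswith(PROTECTED_DIRS):
        return True                    # unprotected location: allow
    return process in KNOWN_APPS       # protected: only known apps may write

print(allow_write("ransom.bin", "/data/finance/ledger.xlsx"))   # False
print(allow_write("winword.exe", "/data/finance/ledger.xlsx"))  # True
```

An unknown binary attempting to encrypt files in place is denied the write, defeating the core mechanism of the attack.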
Merlin Cyber has solution offerings that can help implement these strategies. Cyber Observer, our cyber hygiene solution, manages and monitors the hygiene in an organizational environment to determine if policies are being effectively enforced in real-time. Incorporating the combination of CyberArk for PAM and VMware Carbon Black EDR for file-level access control delivers the essential components which mitigate much of the effectiveness of ransomware attacks. These solutions stop bad actors from elevating privilege and thwart them from encrypting files. Lastly, Darktrace’s sophisticated AI/ML engine can detect and stop actors trying to identify vulnerabilities and laterally move across an organization.
Ransomware strategies are becoming more effective as threat actors consider the landscape of the targeted organization and the preventative nature of the tools which may be available to detect the campaign. IT/OT organizations must take a serious approach to their infrastructure and limit their exposure to vulnerabilities by employing strict cyber hygiene practices to limit elevated privilege and file management control to prevent unauthorized and unintentional writes at the endpoint. Defense teams must also evaluate how vulnerable a backup is based upon live data’s proximity to the archived backup asset.
Merlin’s solution offerings can be combined to create a strong defense against the dynamically changing attack strategies of the Ransomware War.
The Secretary of Commerce must solicit input from the federal government, private sector, academia, and other appropriate actors to identify existing, or develop new, standards, tools, and best practices for complying with the secure software development standards and procedures identified in President Biden’s Executive Order (EO) on cybersecurity. The scope of the EO’s Section 4 on the software supply chain focuses on the ability of software manufacturers and software developers, in particular, to validate all components of the sub-systems that support their offerings. It also focuses on best practices for assessing the risk of components included in their offerings, whether in source, object, or executable form, that cannot be verified or validated to their true origins. Furthermore, the EO solicits guidance from industry, including best practices for identifying breaches in the management of the software supply chain, and allows multiple agencies to receive such alerts and ingest threats into their systems, enabling analysis at a much greater velocity than has been achieved before.
Whether it is an entire platform or a single library, the software lifecycle starts with one or more use case(s). First, a design contains features and functions which address the use case as well as meet the financial goals of the organization. Next, the solution is vetted and management accepts the cost for the development of the software. Engineers then come together and combine reusable objects (development libraries, OS libraries, compilers, web services, databases, etc.) with code and develop a solution, which becomes a release candidate. Along the way, documentation of the successful, as well as not-so-successful, development efforts is compiled. Once it is deemed viable, testing occurs with the candidate, and depending upon the outcome of the testing, the candidate is officially released. The release can then be sold, distributed, and delivered in many forms to consumers.
With so many moving components to the software lifecycle, threats can enter the solution at multiple phases. An approach to addressing security vulnerabilities within a software supply chain will need to:
- Establish policy for promptly identifying security vulnerability indicators and warnings
- Alert on privilege elevation during the composition and execution of an offering, eliminating any unforeseen introduction of vulnerabilities
- Provide both positive and negative artifacts during the software supply chain process (events captured can be shared and readily imported to any consumer data lake for risk analysis)
Looking at the software supply chain from the broadest strokes down to the fine details, a security solution should start with creating policy around a proper build cycle that produces artifacts concerning the success and failure of build, test, and deployment. These artifacts are the cornerstone on which a risk assessment can be made. Additional components necessary to help mitigate supply chain risk include a vulnerability assessment of the target solution and target platform. Looking more closely at what glues the solution together, a static/dynamic code analysis tool should also be leveraged during the overall build/test process to mitigate the risk of introducing unforeseen vulnerabilities downstream to the consumer. Examining what comprises a solution shouldn’t stop at the application itself but should extend to the secondary and tertiary dependencies upon which solutions depend.
Vulnerabilities can present themselves in many ways, and a vulnerability scan tool utilized during the testing process will assist in mitigating risk. The scan’s scope needs to cover all components upstream and downstream of the solution to capture the full potential scope of the risk assessment. As these lower-level components are leveraged, additional policy regarding software supply chain validation needs to be enforced. Sub-systems and repository sources will need appropriate attestation of their validity, which can be achieved using cryptographic mechanisms to verify component integrity. With sub-systems also relying heavily on platform as a service (PaaS) technology, consideration should be given to vetting the location of platform components, including OS/container image validation.
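The cryptographic attestation mentioned above can be sketched as comparing each component’s digest against a pinned manifest. The component names and manifest entries below are illustrative, not real package digests; real pipelines typically use signed manifests or tooling such as Sigstore rather than a bare dictionary:

```python
import hashlib

def sha256(data: bytes) -> str:
    """Digest used to attest a component's integrity."""
    return hashlib.sha256(data).hexdigest()

def verify_components(components, manifest):
    """Return names of components whose digest does not match the manifest."""
    return [name for name, data in components.items()
            if manifest.get(name) != sha256(data)]

lib = b"example library contents"            # hypothetical dependency bytes
manifest = {"libexample": sha256(lib)}       # pinned at build time

print(verify_components({"libexample": lib}, manifest))         # []
print(verify_components({"libexample": lib + b"!"}, manifest))  # ['libexample']
```

A tampered dependency — even by one byte — fails verification and can be blocked before it reaches the build.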
Merlin Labs builds Proof of Concept integrations with several best-in-class cybersecurity partners which demonstrate market-leading solutions to difficult real-world problems, including supply chain security. Some of these tools address CI/CD DevSecOps, application security, and application access management. For example, the combination of CyberArk and Contrast Security can help federal agencies meet the EO’s Section 4 requirements.
CyberArk’s Platform Access Security/Application Access Manager is a critical piece of the puzzle. By managing least privilege to the application layer, it can manage access control and work to leverage threat analysis based upon behavior from within applications. The addition of Contrast gives software providers real-time remediation guidance and attack protection through inline use, cutting valuable time to market due to inherent risks via code practices or dependent modules. With these and other partner solutions, Merlin offers comprehensive solutions for securing the software supply chain.
The Cybersecurity Executive Order (EO) comes at a time when government, businesses, and our way of life are increasingly being disrupted by cyberattacks. It is no wonder that the EO takes an ambitious and comprehensive approach with aggressive timelines on policies, procedures, and technology modernization initiatives. The 7 key sections of the EO reveal two consistent themes: 1) Improve public-private collaboration and 2) Accelerate modernization.
Improve public-private collaboration
We can’t succeed without each other. The EO makes it clear that in order to succeed in defending against today’s threats, the government and the private sector must further strengthen their collaboration. While this public-private partnership has always existed, barriers remain that create challenges with information-sharing, effective collaboration, and accountability.
The importance of close collaboration became more evident with the recent SolarWinds software supply chain compromise and the Microsoft Exchange Server zero-day vulnerabilities. In the SolarWinds attack, it was a cybersecurity firm’s detection that initially exposed a highly sophisticated campaign, one that may have begun several months before being detected. It was reported to government and law enforcement days later, who then mobilized their incident response.
An even more dangerous vulnerability was discovered just weeks later. With the Microsoft Exchange Server vulnerability potentially impacting hundreds of business systems, the FBI took the unprecedented action of remotely accessing these private servers to remove a web shell backdoor program used by attackers.
The rapid pace of technological innovation and the government’s increasing reliance on technology to deliver on its mission bring to light the need for closer partnership between the private and public sectors.
We need to move faster. When it comes to cybersecurity, speed is vital. The ability to rapidly detect threats and respond to incidents is necessary to maintain continuity of business. It is also often the measure of effective security operations. The EO recognizes that for government to keep pace with its adversaries, it needs to accelerate technology modernization.
The EO recommends that government agencies expedite the use of cloud services to quickly and securely move towards a more resilient cybersecurity architecture. Similarly, the EO requires improvements to agencies’ security operations and their ability to identify, detect, and respond to vulnerabilities and incidents.
It is worth noting the focus on zero trust architecture and the capabilities of multi-factor authentication (MFA) and data encryption. These capabilities are essential to good cyber hygiene. They help ensure that additional security modernization efforts are built on a strong security foundation. Identity security and data security are the cornerstones of an effective zero trust security strategy.
How can Merlin help?
The 7 key sections of the EO reveal logical intersections between the two objectives of improving public-private collaboration and accelerating modernization. As we analyze the EO’s requirements to determine how we can best serve government and industry, we find that these intersections present great opportunities to gain efficiencies and maximize the results of our efforts.
At Merlin, we believe that we are well-positioned at these intersections. With industry-leading partners, innovative solutions, and a secure cloud platform, Merlin can help the government with modernization, secure cloud adoption, and security operations.
Converging at the nexus of security and cloud
The EO requires agencies to prioritize cloud technologies as a faster path towards modernization and zero trust architecture. At Merlin, we offer cloud-based identity security, endpoint security, and data security solutions. Since these solutions are delivered from the cloud, they are quick to deploy and provide rapid time to value. Our identity security solutions enable adaptive MFA and risk-based authentication to all assets on the network. To protect high-value assets, we secure privileged credentials with comprehensive privileged access management.
To secure agencies’ journey to the cloud, we offer cloud security solutions that secure cloud access and protect critical applications across the cloud infrastructure. Cloud has expanded the network perimeter and has become one of the key drivers for the move towards zero trust architecture. At Merlin, we take a holistic approach to zero trust architecture. We apply zero trust security principles to all endpoints, applications, and identities. With our holistic zero trust security, user and network access is provided on a least-privilege basis and continuously verified. Resources are protected with granular security controls and automated remediation to continuously enforce zero trust principles.
We follow these core tenets to ensure zero trust is applied across the different layers of your infrastructure:
- Identity as a Perimeter
- Least Privilege
- Intrinsic Workload Security
- Integration & Automation
- Security Analytics
With zero trust security applied throughout the network, agencies can greatly improve the effectiveness of their security operations. The EO requires that agencies implement endpoint detection & response (EDR), logging, and standardized playbooks. At Merlin, we offer solutions for security operations that help our customers apply security analytics and automation to quickly identify and respond to threats and anomalous network activity.
Our cloud-based EDR collects host-based telemetry data for expanded visibility and control of endpoints. Combining threat intelligence data and deep analytics, threat hunting teams can use the cloud-scale data lake to proactively hunt for threats on the network. Security orchestration, automation & response (SOAR) enriches EDR data with additional telemetry data from SIEM, network threat detection, threat intelligence, and other sources, providing better contextual information on incidents and threats.
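The enrichment step described above — joining an EDR alert with telemetry from other sources on a shared indicator — can be sketched in a few lines. The field names and the in-memory “threat feed” are assumptions for illustration; a production SOAR pulls this context from SIEM and commercial intel APIs:

```python
# Hypothetical threat-intel lookup table keyed by indicator.
THREAT_FEED = {"203.0.113.7": {"reputation": "malicious", "campaign": "demo"}}

def enrich_alert(alert: dict, feed: dict = THREAT_FEED) -> dict:
    """Attach threat-intel context to an EDR alert, if any is known."""
    intel = feed.get(alert.get("remote_ip"), {"reputation": "unknown"})
    return {**alert, "intel": intel}

alert = {"host": "ws-042", "remote_ip": "203.0.113.7", "event": "beacon"}
print(enrich_alert(alert)["intel"]["reputation"])  # malicious
```

The enriched record carries the context an analyst needs, so the decision step that follows can run at machine speed.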
To stay one step ahead of adversaries, agencies must continue to adapt and thrive in a dynamic and evolving threat landscape. Merlin continuously analyzes the cybersecurity landscape for emerging technologies and innovative solutions to help our customers with their toughest cybersecurity challenges. Cloud has proven to be an optimal strategy for cybersecurity companies to deliver their software quickly, and for agencies to consume more easily.
Earlier this year, Merlin Cyber launched Constellation GovCloud, a FedRAMP managed service offering that accelerates our OEM partners’ journey towards FedRAMP authorization. This turnkey, platform-as-a-service built on AWS GovCloud reduces the costs and complexity of FedRAMP by meeting nearly 80 percent of the controls.
As more stringent requirements are placed on software OEMs to comply with secure software development and testing practices, the OEMs are looking for more effective ways to ensure that they can attest to and demonstrate conformity. Non-compliance can mean removal from government contracting vehicles. Pursuing FedRAMP authorization becomes a viable strategy for companies to demonstrate compliance. Using a FedRAMP authorized cloud service, OEMs can benefit from the baseline security controls and continuous monitoring functions prescribed by FedRAMP for the IaaS and PaaS, thereby demonstrating compliance with the software security requirements in the EO.
Constellation GovCloud benefits our OEM partners with a path towards FedRAMP authorization and the software security compliance that comes along with FedRAMP. At the same time, government benefits from access to a growing number of secure, software-as-a-service cloud solutions.
Winning the battle requires strategy and execution
For nearly 25 years, Merlin has delivered innovative solutions that help our clients reduce security risk and simplify IT operations. We continue to transform our business to ensure that we are constantly delivering value to our clients. Delivering value is in our DNA.
We formed strategic partnerships with the world’s best-in-class cybersecurity brands to provide our clients with solutions they know and trust. Today, we partner with market-leading and trusted cybersecurity vendors such as CyberArk, Darktrace, Netskope, Okta, and Swimlane. We launched Constellation GovCloud to accelerate our OEM partners’ journey towards FedRAMP, and to expand their routes to opportunities in federal. In these unprecedented times for cyber defenders, Merlin stands ready to partner with government and industry to face these challenges.
Intrusion detection systems (IDS) and intrusion prevention systems (IPS) are great when adversaries are on the outside looking in. For enemies already inside the walls, however, it’s time to arm your team with the superpowers they need to find and eliminate those threats. Don’t get me wrong, the security solutions deployed at the edge of the security stack are a necessity and will likely evolve with threats as they get more and more complex. As we learned with the SolarWinds breach and the more recent Exchange Server exploit, the enemy is already living and expanding inside the infrastructure. What solutions and capabilities do we need to arm our analysts with to discover, understand, and eliminate these threat actors that are so deeply entrenched?
In short, the analysts combatting threats like SolarWinds must be empowered… no, superpowered, to meet and find the complex infections that plague us. While we don’t have an iron suit or magic space stones, we do have a variety of AI solutions that can help us. Coupling that AI with the ability to correlate data from a myriad of sources integrates and augments the capabilities of the existing security stack. Creating a level of normalization across the vendors enables an automated, orchestrated response regardless of platform. This allows rapid, if not automated, correlation that when presented to the Security Operations Center (SOC) analysts, empowers them to make decisions at machine speed. Super-powered speed even.
Why is AI so important to this concept? The threats facing the analyst are moving far beyond simple signature-based detection. Basic indicators of compromise (IOCs) no longer consist of one or two behaviors that reveal an intrusion, but instead comprise multiple clues that are not easily shown to be related without true AI there to sift through the noise. Both unsupervised machine learning and supervised AI models must be leveraged to find the anomaly hiding in plain sight. This empowered X-ray vision must be coupled with the ability to take immediate action, while at the same time providing our augmented analysts with the power to make far-reaching decisions at near-machine speeds.
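In the unsupervised spirit described above, the simplest possible “pattern of life” detector models an entity’s baseline and flags sharp deviations. The metric (daily outbound megabytes), the data, and the threshold are all illustrative; real platforms model many features jointly:

```python
import statistics

def is_anomalous(history, observed, z_threshold=3.0):
    """Flag an observation that deviates sharply from an entity's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return abs(observed - mean) / stdev > z_threshold

baseline = [10.0, 12.0, 11.0, 9.0, 10.5, 11.5]  # MB/day of normal activity
print(is_anomalous(baseline, 11.0))    # False: within the pattern of life
print(is_anomalous(baseline, 250.0))   # True: possible exfiltration
```

The point is not the statistics but the posture: the model watches normal behavior deep inside the network, so exfiltration is flagged before the data waves goodbye.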
Lastly, this detection engine needs to be deep in the core of the infrastructure, not just at the edge, because if the threat is already inside the environment, you only see the threat when your data waves goodbye on its exfiltration journey. Instead, why not look for that pattern of life deep inside your network? Let the AI map the relationships and behaviors of your users, devices, and applications in their daily lives before enacting its swift enrichment and response.
Only with AI can we readily identify that a trusted system has been compromised, empowering the analyst to act. AI isn’t there merely to see the threat but to confirm it and drive large-scale remediation, while the analyst gathers the data to justify the response. This is where security orchestration, automation, and response (SOAR) in particular shines, as these repeatable actions must be automated to cut down response time and get ahead of the intrusions.
It’s all well and good to be concerned about content, but it’s really the behavior that tells the story. In the case of SolarWinds, the enemy hid in the white noise that makes up the background of every network. AI, however, could (and did) recognize those abnormalities, alert the SOAR platform, and drive the correlation that finds and fights enemies already inside the walls.
Security orchestration, automation, and response (SOAR) solutions are often billed as a panacea that will solve all of a security operations center’s (SOC) problems, reduce mean time to repair (MTTR), improve efficiency, act as a single pane of glass, and even make a really good cup of coffee. You name it and someone somewhere has claimed that a SOAR platform can do it. The truth, however, is a little more complicated.
Yes, a SOAR solution can automate a great number of tasks—if properly implemented. If a task can be broken down into steps that are repeatable, reusable, and consistent, then it has the potential to be automated. But if an organization tries to take on too much at once or is unfocused in its approach, the implementation can rapidly get out of hand and lead to failure and ultimately shelfware. Here are a few examples of common mistakes and misconceptions about SOARs.
Boiling the ocean
A SOAR solution can be incredibly powerful; the initial desire to automate everything in sight is akin to the first time you get a label maker. You want to apply it to everything, all at once. Some of the worst experiences I’ve seen have come from environments that tried to build a complex interweave of use cases and became bogged down in details and frustration. The key to a successful implementation is to start small. Find one or two simple use cases that allow the SOC team to get a handle on what can be done and the thought process needed to build the use case. Initial simple automations and response actions, such as threat enrichment of an IOC (indicator of compromise), hash, or URL, are particularly effective as they can be easily reused as part of more complex actions later.
Training? I don’t need any stinkin’ training!
Yes, you do. While this is often the first thing on the cutting room floor when budgeting for a new solution, training usually makes the difference between a successful implementation and a package becoming shelfware. This is the opportunity for your team to ask questions of the people who implement and use the technology daily. Take advantage of it. A SOAR platform, like most integration-focused solutions, has many hidden features and nuances to how complex actions like a workflow are created. These are going to be automated actions that are hopefully going to run your business and you’ll need to understand how they are constructed.
I have scripts, isn’t that the same thing?
Most engineers, analysts, or administrators who have worked in IT for more than a few years have ended up running into tasks that they find themselves doing repeatedly. Inevitably, someone on the team will write a script, whether it is Visual Basic, a batch file, or a snippet of Java, for each of those routine tasks. Those scripts are running continually in a SOC near you right now. So, the question becomes: If I’ve already got scripts running, why do I need a SOAR? Remember, SOAR stands for security orchestration, automation, and response. Automation refers to performing singular tasks repeatedly, orchestration is putting multiple singular tasks together, and response is really the key because it’s the ability to evaluate, make a choice, and then perform additional actions. The ability to build in complex response actions, either in an automated fashion or via human interaction, is one of the primary differentiators of a good SOAR platform. This doesn’t mean throwing the scripts out; it means taking them and converting them into SOAR workflows that can provide response choices, in-depth auditing and error tracking, and consistent integration across multiple platforms. This is where SOAR sets itself apart.
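The automation/orchestration/response distinction can be sketched with three hypothetical task functions: automation performs a single repeatable task, orchestration chains tasks together, and response adds the decision that drives further action:

```python
def lookup_hash(file_hash: str) -> str:
    """Automation: one repeatable task (hypothetical intel set)."""
    known_bad = {"abc123"}
    return "malicious" if file_hash in known_bad else "clean"

def isolate_host(host: str) -> str:
    """Automation: another single task."""
    return f"{host} isolated"

def handle_alert(host: str, file_hash: str) -> str:
    """Orchestration + response: chain tasks, then decide and act."""
    verdict = lookup_hash(file_hash)
    if verdict == "malicious":
        return isolate_host(host)
    return f"{host} cleared"

print(handle_alert("ws-042", "abc123"))   # ws-042 isolated
print(handle_alert("ws-043", "def456"))   # ws-043 cleared
```

A standalone script typically stops at the first function; the workflow around it — the verdict, the branch, the audit trail — is what a SOAR platform adds.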
It will be done tomorrow right?
Not likely. While an initial set of use cases or workflows can usually be imported from the SOAR vendor, they still need to be customized to your environment. For instance, it may have been written for a different firewall or threat feed vendor. Each of these steps will need to be verified and tested with the current version of the existing platforms deployed in the environment. A simple version difference in the target platform can make a huge difference. Which brings us to…
Integrations are simple
Umm, no. To be successful, a SOAR platform will need to communicate with many different platforms that already exist in your environment. Let’s face it, the IT space is full of companies that are often competing with one another in multiple verticals, and one vendor is rarely sole-sourced throughout the organization. It’s not uncommon to see vendors significantly change APIs, database structures, architectures, and platforms between versions, with either missing or incorrect documentation to go with it. These changes are not made to purposefully break outside integrations but are instead made with their own interests in mind. Simply put, IT infrastructures are complex environments with lots of moving parts that need to be carefully integrated to get the best value from the solutions. Often the response from vendors’ support teams will boil down to “not my problem.” Ultimately, a good SOAR vendor will try to keep up with the integrations as new versions are released, but some of this will also come back to a good relationship between you and your vendor. Simply letting them know that a new version has been released and that you intend to upgrade soon can change the integration team’s process to better support you.
Things to keep in mind
So, what are the main takeaways? SOAR solutions can be incredibly powerful enablers of the cyber and operations teams if some simple rules are followed:
- Stay focused. Choose a singular task to learn what works in your organization. Use this as your in-house training scenario to learn the process.
- Take your time. Diagram the workflow on a whiteboard and take your time finding the lowest common denominator to help pick one or two use cases to leverage as your showcase.
- Identify simple integrations. Choose the deployed solutions that can be easily integrated to start with. Typically, they will be API driven and allow you to combine with threat enrichment to see immediate benefits.
- Re-use. Ideally, your SOAR platform allows you to reuse the work you’ve already done. You’ve created the first piece of the puzzle for the future and you can leverage that same structure and concept again to reduce the amount of effort on your next workflow.
Merlin Cyber has partnered with Swimlane to help our public-sector customers avoid these and many other challenges that they encounter. Swimlane provides a comprehensive SOAR platform leveraging a drag-and-drop workflow builder that enables organizations to rapidly build and deploy workflows to the field. With built-in case management, auditing, reporting, and a robust integration library, Swimlane provides environments with the tools they need to be successful.
If your organization wants to rapidly improve staff efficiency and drastically decrease MTTR by leveraging a powerful SOAR platform, we can demo Swimlane and help customize a solution that meets your objectives.
Congress is working on another coronavirus relief package and telework measures are among the provisions being discussed. One group of senators is urging Congress to maintain maximum telework for federal employees throughout the pandemic. Another group of senators wants to see additional funding for upgrading agency IT systems. As they await a final bill and begin making decisions on end-of-fiscal-year dollars, federal agencies should strongly consider investments that enable effective telework, at scale, for the foreseeable future.
1E’s Tachyon is one such investment. Tachyon is a real-time, modern endpoint management solution that simultaneously improves employee experience and IT monitoring and remediation of devices. The single-agent platform is efficient, easy to deploy, and entirely API driven. All its capabilities can be leveraged through ServiceNow or used to augment tools such as Splunk and Microsoft Endpoint Manager (MEM). These robust integrations remove the need for multiple agents and provide federal agencies with several benefits.
Seamless Telework Experience
Tachyon gives IT teams enterprise-wide visibility of their devices from a single dashboard. Synthetic “microtransactions” periodically test the impact of a load on the environment to help identify processes that are interfering with normal operations, and how. This helps IT accurately gauge device responsiveness and performance. With so many employees working remotely, the ability to see in real-time who’s working vs. who’s having issues is vital to improving the end-user experience.
Ticketing Workflow Automation & Reduction
Through its ServiceNow integration, Tachyon’s functionality can be accessed directly from a single console for incident tracking and remediation. Help desk staff can diagnose and fix issues directly from the ServiceNow admin page, significantly improving response rates and response times on incidents. Also, with Tachyon running in the background, remote workers get an enhanced version of ServiceNow’s virtual agent that enables self-servicing for common issues.
Real-Time Response & Remediation
IT staff can query endpoints and perform actions in a matter of seconds with Tachyon. When issues pop up, staff can address them by taking real-time control of endpoints across any of their environments. They can also prevent issues from replicating on other devices by setting new enterprise-wide policy controls. This proactive maintenance capability automates many previously manual IT processes, bringing substantial efficiencies.
Tachyon’s Core Modules
Put Tachyon to the Test
Merlin is currently offering federal agencies a 48-hour implementation of Tachyon against the tool’s two main endpoint use cases: visibility and control. After initial requirements are fulfilled by the customer, the rapid implementation of Tachyon will be structured like this:
Day 1 (Visibility): Setup, pilot group, and testing
- Stand up required infrastructure
- Install 1E client in a pilot group of endpoints
- Test the software in your environment
- Gain complete visibility of all remote devices
Day 2 (Control): Analysis, roll out, and collaboration
- Analyze performance data from Day 1
- Gain control of remote devices and fix any issues
- Expand 1E client beyond pilot endpoints
- Enable core teams to use Tachyon
1E will provide a dedicated solutions expert, at no cost, who will help fast track the deployment of the platform in your environment. This two-day implementation can be used to manage up to 50,000 remote devices.
If your agency needs to modernize its endpoint management to enable maximum telework, scale-up ticketing and remediation with automation and self-service, and maintain a proper security and compliance posture, we can demo Tachyon and customize a solution to meet your objectives.
With the proliferation of cloud-based applications, organizations are faced with complex challenges regarding security as a whole and how to provide controls around the data that now resides somewhere in Neverland. We have moved away from the idea of the workplace’s four walls, with their well-understood kill chains, and find that our data is moving to the cloud at an alarming rate.
Perhaps the largest issue when moving to the cloud is trying to figure out how to secure applications, and users, without adding overhead and complexity. The cloud is supposed to make our lives easier while ensuring that the bad guys can’t get in. On the surface, this seems like an easy fix, especially when you think of it in terms of the existing security infrastructure. Unfortunately, reality sets in, and you begin to see this magical space rapidly becoming a logistical nightmare. How am I going to secure all this? Who is going to vet my users? What happens if an application is compromised, and allows a nefarious user to crawl my properties from east to west? How fired am I going to be at the end?
This is where the DevSecOps approach comes to the rescue. The whole premise of DevSecOps is placing security controls within the applications themselves. In days gone by, things like admin credentials and cross-application access controls were hard-coded into apps. While this was business as usual for many years, it has increasingly become a readily available attack vector for hackers. Combine this with known, and previously unknown, CVEs and it becomes a glaring hole in your security posture.
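To make the hard-coded-credentials point concrete, here is a minimal Python sketch of the alternative: the application resolves its credential at runtime from the environment rather than from its source code. In practice a secrets manager (Vault, a cloud secret store, etc.) would inject the value; the variable names here are illustrative, not from any specific product.

```python
import os

# Anti-pattern: a credential baked into the source, visible to anyone
# with repository access and to any attacker who reads the binary:
# DB_PASSWORD = "s3cr3t-admin-pw"

def get_db_password() -> str:
    """Fetch the database password from the environment at runtime.

    In a real deployment this lookup would be backed by a secrets
    manager that injects the value into the process environment or a
    mounted file; the app never carries the secret in its code.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD not provided; refusing to start")
    return password

# Stand-in for a real secrets injector supplying the value at deploy time.
os.environ["DB_PASSWORD"] = "injected-at-deploy-time"
print(get_db_password())  # → injected-at-deploy-time
```

Because the secret lives outside the code, rotating it is a deployment change rather than a rebuild, and a leaked repository no longer means a leaked credential.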
WHITE PAPER: On Your Mark, DevSecOps, Go!
The most common method for addressing pre-production security gaps is to have a human security specialist review the code, perform the STIG process, and apply various toolsets to identify and remediate vulnerabilities. The inherent problem with this process is that security staff are often overrun and facing long backlogs as the Dev team increases the speed at which apps are ready for deployment. Adding Sec to DevOps allows the developers to inject security into the earliest processes and, by doing so, creates self-healing, self-remediating applications that are fully aware of known exploits and continually updated to reflect novel threats in a fully automated process.
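As a simplified illustration of injecting security early in the CI/CD pipeline, the sketch below shows the kind of automated gate a build stage might apply: scan committed source for hard-coded secrets and fail the build on a hit. A real pipeline would use a dedicated scanner; the regex patterns and gating logic here are illustrative only.

```python
import re

# Patterns that suggest hard-coded secrets. A production pipeline would
# use a purpose-built scanner, but the gating logic is the same.
SECRET_PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
]

def scan_source(source: str) -> list[str]:
    """Return the lines that match a known secret pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append(f"line {lineno}: {line.strip()}")
    return findings

code = 'timeout = 30\npassword = "hunter2"\napi_key = "abc123"\n'
findings = scan_source(code)
for f in findings:
    print(f)
# A CI job would exit non-zero here to block the deployment.
print("FAIL" if findings else "PASS")
```

Run at commit time, a check like this catches the credential before it ever reaches the security team's backlog, which is the whole point of shifting security left.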
The second problem hard-coded credentials can present is in app-to-app communication. As it stands today, a vulnerable application can be compromised, allowing bad actors to view dependencies, make changes, or otherwise gain access to additional properties, all while masquerading as an approved application. This becomes an enormous concern, as some of the database, app, and user calls could cross multiple applications and provide access to something that may not have robust security controls baked in. Mainframes may be out of vogue, but they are often the legacy central repositories for huge amounts of data, and may only be able to provide basic credential authorization. We’re left relying on the legacy app, as a full lift and shift to DevSecOps may not be feasible with today’s technologies. We must secure these apps.
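One way to replace shared, static app-to-app credentials is per-application signed requests, so the receiving service can verify which application is calling and reject forgeries and stale replays. The sketch below uses only Python's standard hmac module; the application names and the key-distribution scheme are hypothetical, a sketch of the pattern rather than a specific product's implementation.

```python
import hashlib
import hmac
import time

# Each calling application holds its own key, so the receiver can tell
# *which* app is calling instead of trusting one shared credential.
APP_KEYS = {"billing-app": b"billing-key", "reporting-app": b"reporting-key"}

def sign_request(app_id: str, payload: bytes, ts: int) -> str:
    """Sign payload with the caller's key, binding in identity and time."""
    msg = app_id.encode() + b"|" + str(ts).encode() + b"|" + payload
    return hmac.new(APP_KEYS[app_id], msg, hashlib.sha256).hexdigest()

def verify_request(app_id: str, payload: bytes, ts: int, sig: str,
                   max_age: int = 300) -> bool:
    """Accept only a fresh signature made with the claimed app's key."""
    if app_id not in APP_KEYS or abs(time.time() - ts) > max_age:
        return False
    expected = sign_request(app_id, payload, ts)
    return hmac.compare_digest(expected, sig)

now = int(time.time())
sig = sign_request("billing-app", b"GET /invoices", now)
print(verify_request("billing-app", b"GET /invoices", now, sig))    # True
print(verify_request("reporting-app", b"GET /invoices", now, sig))  # False: wrong identity
```

A compromised app can no longer masquerade as its neighbor, because it never held the neighbor's key; the timestamp check also caps how long a captured request remains replayable.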
In conclusion, these are some key points to take away for properly securing your applications and users as you move to the cloud:
- DevOps alone is falling by the wayside. You must look at holistic solutions to inject security as early in the CI/CD pipeline as possible.
- App-to-app security is paramount. If your applications cannot fully vet what they are talking to, they become open to compromise.
- Secure your cloud containers. This seems like a no-brainer, but be mindful of your cloud architect’s time and workload, with the realization that posture management can be fully automated.
- Apply multi-factor authentication (MFA) to everything. Move security controls as close to the payload as possible. Network segmentation is great, but on its own it still leaves vulnerabilities exploitable within each segment.
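On the MFA point, the time-based one-time password (TOTP) algorithm from RFC 6238 is one of the most common second factors, and it can be sketched with the Python standard library alone. This is a minimal illustration of the algorithm, not a hardened implementation.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HOTP over a time counter)."""
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_totp(secret: bytes, code: str, timestamp: int, step: int = 30) -> bool:
    # Accept the adjacent time steps as well, to tolerate small clock drift.
    return any(hmac.compare_digest(totp(secret, timestamp + d * step), code)
               for d in (-1, 0, 1))

# RFC 6238 Appendix B test vector: this secret at time 59 yields "94287082".
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

Because the code is derived from a per-user secret plus the current time, a phished password alone is no longer enough to authenticate, which is exactly the gap MFA closes.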
During the COVID-19 outbreak, agencies have shifted much of their workforce to telework. The strain on existing infrastructures has made headlines, whether it be the DoD asking employees to avoid non-essential services while on the VPN or other agencies staggering work schedules and limiting overall Citrix users. Further complicating these issues is the increase in cloud-based resources.
I recently heard from an agency user attempting to participate in a required training session. Even though the training was hosted in the cloud, the user needed to use the overburdened VPN to access it, and the result was poor video quality. The problem is clear: current remote access systems were not scoped for this flood of users.
As it always seems to be, while IT operations and security teams deal with new complexities and challenges, malicious actors see newfound opportunities. CISA recently released a new alert (AA20-073A) that includes the following considerations regarding teleworking:
- As organizations use VPNs for telework, more vulnerabilities are being found and targeted by malicious cyber actors.
- As VPNs are 24/7, organizations are less likely to keep them updated with the latest security updates and patches.
- Malicious cyber actors may increase phishing emails targeting teleworkers to steal their usernames and passwords.
- Organizations that do not use multi-factor authentication (MFA) for remote access are more susceptible to phishing attacks.
- Organizations may have a limited number of VPN connections, after which point no other employee can telework. With decreased availability, critical business operations may suffer, including IT security personnel’s ability to perform cybersecurity tasks.
The COVID-19 stimulus bill passed in March provided agencies the resources necessary to address telework infrastructure and security needs. Rather timely to this funding, there is new guidance from OMB regarding updates to TIC 2.0, providing the ability to use cloud-based solutions to assist with these issues. More specifically, the OMB memorandum regarding TIC 3.0 provides for the following new use case:
Remote Users: This use case is an evolution of the original FedRAMP TIC Overlay (FTO) activities. This use case demonstrates how a remote user connects to the agency’s traditional network, cloud, and the Internet using government-furnished equipment (GFE).
So how can agencies leverage these new TIC 3.0 guidelines to alleviate current strain and security concerns, while future-proofing their investments? TIC 3.0 allows agencies to modernize and move towards embracing a zero trust architecture (ZTA) by removing the outdated “trusted vs. untrusted” model and instead focusing the perimeter around the endpoint. To do this, the focus should be on the following key principles:
- Remove traffic destined for the cloud from current remote access infrastructure, thus lessening the load on the overburdened systems.
- Leverage the scalability and elastic nature of the cloud to deal with any further unexpected surges of remote access.
- Institute the principle of least privilege for remote access to overcome some of the shortcomings of VPN technologies.
- Where possible, move to an “identity as the perimeter” approach, targeting security at the remote user.
- Secure both new and legacy applications as the move to ZTA occurs, thus ensuring critical legacy systems are not left unsecured.
- Provide the least amount of friction to the end-users!
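The "identity as the perimeter" and least-privilege principles above can be sketched as an access-decision function that evaluates every request on user identity, per-application entitlement, and MFA status, and never on network location. The user and application names below are hypothetical.

```python
from dataclasses import dataclass

# Per-user, per-application entitlements: each user may reach only the
# specific applications they have been granted, nothing network-wide.
ENTITLEMENTS = {
    "alice": {"hr-portal", "email"},
    "bob": {"email"},
}

@dataclass
class AccessRequest:
    user: str
    app: str
    mfa_passed: bool

def authorize(req: AccessRequest) -> bool:
    """Zero trust decision: identity plus entitlement plus MFA.

    Note what is absent: no check of source IP or 'on the VPN' status.
    Being inside the network perimeter confers nothing.
    """
    return req.mfa_passed and req.app in ENTITLEMENTS.get(req.user, set())

print(authorize(AccessRequest("alice", "hr-portal", mfa_passed=True)))   # True
print(authorize(AccessRequest("bob", "hr-portal", mfa_passed=True)))     # False: not entitled
print(authorize(AccessRequest("alice", "hr-portal", mfa_passed=False)))  # False: no MFA
```

In a real deployment this decision sits in an identity-aware proxy or broker in front of each application, which is how cloud-bound traffic can be taken off the legacy VPN path entirely.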
By embracing TIC 3.0 and ZTA, agencies can augment current remote access capabilities (VPN, Remote Desktop, Citrix, etc.) by providing access to cloud applications without the need to use old remote access systems. Further, this can be done alongside the current infrastructure, avoiding the dreaded “rip and replace,” and increasing security along the way.
At Merlin, we scout innovative, emerging technologies and establish technology partnerships that allow us to effectively implement unique remote access strategies that incorporate zero trust principles. As the model below illustrates, we provide end-to-end secure access, leveraging highly scalable and elastic solutions. Using cloud-based and cloud-native technologies like Okta and Netskope Private Access can increase security while lessening the load on remote access infrastructures. Adding Silverfort’s unique SSO capabilities can bring those legacy systems up to today’s security standards.
While there is no quick fix for legacy remote access systems, agencies can take the first steps in their zero trust journey while augmenting the capacity of current systems and increasing overall security.
Last month, the Government Accountability Office released a new report titled DOD Needs to Take Decisive Actions to Improve Cyber Hygiene. The GAO report found that the Defense Department is behind on three major cyber hygiene initiatives and lacks cybersecurity accountability among its leadership. If a critical government agency like the DOD struggles with cyber hygiene, what about a regular company?
An average-sized company usually has 25-plus security vendors. Organizations have implemented tool after tool in efforts to secure their data, systems, and users. This has left them with misconfigured, repetitive, or siloed tools and an uphill climb toward proper cyber hygiene.
RELATED: 5 of the biggest cyber hygiene myths
While proper cyber hygiene involves tools, training, and policies, having a fragmented toolset makes the concept a non-starter. Tool fragmentation and overlapping tool capabilities put additional burden on IT staff, making it difficult to respond to threats, quantify risks, or effectively manage an organization’s most critical security controls. As a result, the organization’s cyber hygiene suffers.
Poor cyber hygiene creates security vulnerabilities that require decisive action. It’s vitally important to correctly configure and maintain your security tools and to ensure they remain effective. In other words, cybersecurity leaders should consider maximizing the ROI on already-purchased tools before adding new ones to their crowded ecosystem.
Tool-proof your cyber hygiene
Practicing proper cyber hygiene goes beyond just purchasing and implementing security tools. Using the tools correctly is what helps solidify overall cybersecurity posture. And it all starts with proper configuration of the tools you have.
Establishing configuration baselines is a fundamental but often overlooked cyber hygiene task, and that neglect is a big reason tool misconfiguration remains a frequent cause of breaches. While we rely on security tools to maintain proper hygiene, their effectiveness is entirely in our hands.
Here’s how to weigh the performance and usage of existing security tools:
- Analyze if the tools you’re using are engineered properly and behaving correctly. For example, if it’s a vulnerability scanner, is it updated and scanning your entire IT landscape? If it’s a next-generation firewall, are you using all the features appropriately?
- Review and score every tool with a critical eye. Try to rationalize each tool against your organization’s current and future needs. Move past qualitative descriptions and into quantitative analysis by ranking and scoring them with questions like:
- Does this tool have a niche or special purpose?
- Is it more or less secure than other options?
- Examine each tool’s actual configuration. Is it configured securely? Does it have default passwords or other weak controls? How easy is it to harden?
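The configuration review described above lends itself to automation. Below is a hypothetical Python sketch that diffs a tool's live settings against a hardened baseline and flags drift such as disabled updates; the tool names and settings are invented for illustration.

```python
# Hardened baseline per tool: the settings each tool must hold to be
# considered correctly configured. Names here are purely illustrative.
BASELINE = {
    "vuln-scanner": {"auto_update": True, "scan_scope": "all-assets"},
    "ngfw": {"admin_password_default": False, "ips_enabled": True},
}

def audit(tool: str, config: dict) -> list[str]:
    """Return a list of deviations between live config and the baseline."""
    issues = []
    for setting, expected in BASELINE.get(tool, {}).items():
        actual = config.get(setting)
        if actual != expected:
            issues.append(f"{tool}: {setting} is {actual!r}, expected {expected!r}")
    return issues

# A scanner that has silently stopped updating itself gets flagged.
live = {"auto_update": False, "scan_scope": "all-assets"}
for issue in audit("vuln-scanner", live):
    print(issue)
```

Run on a schedule, a check like this turns the one-time configuration review into the continuous baseline enforcement that proper cyber hygiene actually requires.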
The complexity of today’s IT infrastructures coupled with security tool fragmentation and misconfiguration makes cyber hygiene challenging for companies of all sizes. Security tools are only as strong as an organization’s internal process for maintaining them. Luckily, there are solutions that automate much of the work and provide organizations with a comprehensive way to implement and maintain proper cyber hygiene.