This week in cybersecurity from the editors at Cybercrime Magazine – Read the Full Story in Cloud Computing Magazine. Sausalito, Calif. – Nov. 13, 2024 – Microsoft 365 has become the backbone of critical operations for businesses of all sizes in various industries. According to estimates, approximately 345 …

The post Microsoft 365: Guide To Backup And Recovery. What’s At Risk. appeared first on Cybercrime Magazine.

Source: https://cybersecurityventures.com/microsoft-365-guide-to-backup-and-recovery-whats-at-risk/

This week in cybersecurity from the editors at Cybercrime Magazine – Read the Full Story in CyberOptik. Sausalito, Calif. – Nov. 12, 2024 – The best cybersecurity websites effectively communicate trust, expertise, and cutting-edge technology through their design and functionality, making them essential components of a cybersecurity company’s success …

The post 20 Best Cybersecurity Website Designs appeared first on Cybercrime Magazine.

Source: https://cybersecurityventures.com/20-best-cybersecurity-website-designs/

In the rapidly evolving landscape of artificial intelligence (AI) and large language models (LLMs), security can no longer be an afterthought. Implementing robust security measures is paramount as these technologies become integral to enterprise operations. However, proper security in AI goes beyond traditional cybersecurity practices — it must also encompass ethical considerations and responsible AI principles. 

This guide provides IT practitioners and decision-makers with a comprehensive approach to applying zero-trust principles in AI and LLM architectures, emphasizing the integration of ethical considerations from the ground up. 

The convergence of security and ethics in AI architecture 

Recent publications, such as the AI ethics principles outlined by Architecture and Governance, highlight the growing recognition that security and ethics in AI are inextricably linked. Ethical AI is secure AI, and secure AI must be ethical. The two concepts are mutually reinforcing and essential for responsible AI development. 

Fundamental ethical principles that must be incorporated into AI architecture include: 

  • Fairness and non-discrimination 
  • Transparency and explainability 
  • Privacy and data protection 
  • Accountability and governance
  • Human-centricity and societal benefit 

These principles should not be considered constraints but fundamental design elements that enhance AI systems’ security, reliability and trustworthiness. 

Implementing zero trust in AI: A holistic approach 

Zero-trust architecture is based on “never trust, always verify.” When applied to AI systems, this approach must encompass traditional security measures and ethical safeguards. Let’s explore how this can be achieved across different aspects of AI systems. 

1. Secure and ethical data pipelines 

The foundation of any AI system is its data. Implementing zero trust in data pipelines involves securing the data and ensuring its ethical use. 

Three critical techniques for securing training data are differential privacy, homomorphic encryption and secure enclaves. 

  • Differential privacy adds controlled noise to the data, preserving privacy while maintaining utility. This technique is beneficial when working with large datasets where individual privacy is crucial. For example, in healthcare AI applications, differential privacy can allow training models on patient data without risking individual patient identification.
  • Homomorphic encryption allows computations on encrypted data, ensuring privacy even during processing. This revolutionary approach means data can remain encrypted throughout the entire AI pipeline, from ingestion to model training and inference. While computationally intensive, advances in this field make it increasingly practical for real-world applications. 
  • Secure enclaves provide a protected environment for data processing, isolated from the rest of the system. Technologies like Intel’s Software Guard Extensions (SGX) create trusted execution environments where sensitive computations can occur, shielded from potential threats in the broader system. 
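The differential-privacy idea above can be sketched in a few lines. This is a minimal illustration, not a production mechanism (real deployments use vetted libraries and track a privacy budget across queries); the dataset, clamping bounds, and epsilon value below are invented for the example.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling for the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_mean(values, epsilon, lower, upper):
    """Differentially private mean: clamp each value into [lower, upper]
    so that one record's influence is bounded, then add Laplace noise
    calibrated to that sensitivity."""
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / len(clamped)
    sensitivity = (upper - lower) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Hypothetical patient ages: the released mean reveals little about any one patient.
ages = [34, 45, 29, 52, 41, 38, 47, 33]
print(private_mean(ages, epsilon=1.0, lower=18, upper=90))
```

Smaller epsilon values add more noise and give stronger privacy; the clamping step is what makes the noise calibration honest, since it caps how much any single record can shift the mean.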

Ensuring data fairness and representativeness is crucial for building ethical AI systems. This involves rigorous analysis of training data to identify and mitigate biases. Stratified sampling and active learning can help create more balanced and representative datasets. 
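Stratified sampling, mentioned above, is straightforward to sketch with the standard library: sample the same fraction from every group so minority groups stay represented. The records and the 90/10 class split below are fabricated for illustration.

```python
import random
from collections import defaultdict

def stratified_sample(records, key, fraction, seed=42):
    """Draw the same fraction from every group defined by `key`,
    so minority groups are not washed out of the training subset."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for r in records:
        groups[key(r)].append(r)
    sample = []
    for members in groups.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Fabricated, heavily imbalanced dataset: 90 "A" records, 10 "B" records.
data = [{"label": "A"}] * 90 + [{"label": "B"}] * 10
subset = stratified_sample(data, key=lambda r: r["label"], fraction=0.2)
print(sum(1 for r in subset if r["label"] == "B"))  # minority group kept: 2
```

A plain random 20% sample could easily draw zero "B" records; stratifying guarantees proportional representation of each group.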

Moreover, implementing data governance frameworks that track data lineage and usage is essential. This not only aids in compliance with regulations like GDPR but also ensures that data is used in line with its intended purpose and ethical guidelines. 

2. Secure model training and inference 

Securing the model training and inference processes involves technical security measures and ethical considerations. 

Federated learning is a powerful approach that keeps data decentralized: models are trained on distributed datasets without the data ever being centralized. This technique is particularly valuable in scenarios where data privacy is paramount, such as healthcare or finance. 

In a federated learning setup, model updates are shared instead of raw data, significantly reducing privacy risks. However, implementing federated learning securely requires careful consideration of potential attacks, such as model inversion or membership inference attacks. 
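A toy federated-averaging (FedAvg) round for a one-parameter linear model makes the "share updates, not raw data" idea concrete. The clients, datasets, and learning rate here are invented for the sketch; a real system would also need secure aggregation and defenses against the model-inversion and membership-inference attacks mentioned above.

```python
def local_update(w, data, lr=0.1):
    """One gradient-descent step for the model y = w * x, computed
    locally on a client -- the raw (x, y) pairs never leave it."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, client_datasets):
    """Each client trains locally and shares only its updated weight;
    the server averages the updates (FedAvg)."""
    updates = [local_update(global_w, d) for d in client_datasets]
    return sum(updates) / len(updates)

# Three fabricated clients whose data all fits y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)], [(1.5, 3.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # converges to 2.0
```

Note that only the scalar weight crosses the network each round; that is the whole privacy argument, and also why update payloads themselves must be protected against inference attacks.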

Incorporating ethical constraints directly into the model training process is crucial for developing responsible AI. This can involve techniques such as: 

  • Fairness-aware machine learning algorithms that optimize for both performance and fairness metrics. 
  • Adversarial debiasing techniques that actively work to remove unwanted biases from model predictions. 
  • Regularization methods that penalize models for exhibiting biased behavior. 

These approaches ensure that ethical considerations are baked into the model from the start rather than being treated as an afterthought. 
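As a concrete example of a fairness metric such a trainer might optimize, the demographic parity gap is the spread in positive-prediction rates across groups; a fairness-aware objective penalizes large gaps alongside the usual loss. The predictions and group labels below are made up for illustration.

```python
def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rate across groups. 0.0 means parity; larger values mean the model
    favors some group's members for positive outcomes."""
    rates = {}
    for g in set(groups):
        picks = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(picks) / len(picks)
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

In fairness-aware training this quantity (or a differentiable surrogate of it) is added to the loss, so the optimizer trades a little accuracy for a smaller gap.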

3. Secure and responsible model deployment 

Deploying AI models securely and responsibly involves implementing robust access controls, continuous monitoring and ethical guardrails. 

Implementing attribute-based access control (ABAC) for model access is a key component of a zero-trust AI architecture. ABAC allows for dynamic, risk-based access decisions, considering factors such as user role, data sensitivity and request context. 

For instance, a data scientist might be granted access to run inference on a model during work hours from a company device, but that same access might be denied outside of work hours or from an unrecognized device. 
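The work-hours scenario above can be sketched as a toy ABAC policy. The attribute names and thresholds are illustrative only; a real deployment would evaluate many more signals (risk score, data sensitivity, device posture) and pull them from an identity provider rather than a dict.

```python
from datetime import time

def abac_allow(request):
    """Toy ABAC decision: permit model inference only for data
    scientists, on managed devices, during working hours. Every
    rule must pass -- the zero-trust 'always verify' stance."""
    rules = [
        request["role"] == "data_scientist",
        request["device_managed"],
        time(8, 0) <= request["time"] <= time(18, 0),
    ]
    return all(rules)

req = {"role": "data_scientist", "device_managed": True, "time": time(10, 30)}
print(abac_allow(req))                            # True
print(abac_allow({**req, "time": time(23, 0)}))   # False: outside work hours
```

Because the decision is recomputed per request from current attributes, revoking a device or changing a role takes effect immediately, unlike a statically granted role.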

Implementing ethical guardrails ensures that AI models behave responsibly in production. These systems can: 

  • Monitor model inputs and outputs for inappropriate content. 
  • Enforce ethical guidelines by blocking or flagging problematic requests. 
  • Provide audit trails of model usage for compliance and governance. 

By implementing such guardrails, organizations can ensure that their AI systems remain aligned with ethical principles even when interacting with diverse, real-world inputs. 
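A minimal guardrail covering all three bullets might look like the sketch below. The denylist terms and audit-record shape are invented, and a production system would use trained classifiers rather than substring matching; the point is the pattern of screen, block-or-allow, and log.

```python
BLOCKLIST = {"ssn", "credit card"}  # illustrative terms only

def guardrail(prompt, audit_log):
    """Screen a prompt against a denylist, block on a hit, and record
    every decision so there is an audit trail for governance review."""
    hit = next((t for t in BLOCKLIST if t in prompt.lower()), None)
    decision = "blocked" if hit else "allowed"
    audit_log.append({"prompt": prompt, "decision": decision, "term": hit})
    return decision == "allowed"

log = []
print(guardrail("Summarize this meeting", log))   # True: passes through
print(guardrail("List every customer SSN", log))  # False: blocked and logged
print(len(log))                                   # 2: both decisions audited
```

Logging the allowed requests too, not only the blocked ones, is what makes the trail useful for after-the-fact compliance review.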

4. Continuous monitoring and threat detection 

Implement AI-specific threat detection systems that not only identify potential security breaches but also detect ethical violations. 

Anomaly detection systems are crucial in identifying potential security threats or ethical violations in AI systems. These systems can: 

  • Monitor input data for potential adversarial attacks or data poisoning attempts. 
  • Analyze model outputs for unexpected or potentially harmful responses. 
  • Track performance metrics to detect model drift or degradation. 

Advanced anomaly detection systems might employ isolation forests or autoencoders to identify outliers in high-dimensional spaces typical of AI data. 
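As a lightweight stand-in for the isolation-forest or autoencoder detectors mentioned above, a robust z-score (median/MAD) flags outliers in a single monitored metric; the latency figures below are fabricated for the example.

```python
import statistics

def flag_anomalies(values, threshold=3.5):
    """Flag values whose robust z-score (based on the median and the
    median absolute deviation, so outliers don't skew the baseline)
    exceeds the threshold."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return [v for v in values if abs(0.6745 * (v - med) / mad) > threshold]

latencies_ms = [102, 98, 105, 99, 101, 97, 480, 103]  # 480 ms is an outlier
print(flag_anomalies(latencies_ms))  # [480]
```

Using the median and MAD instead of mean and standard deviation keeps the baseline stable even when the stream already contains a few anomalous points, which is exactly when detection matters.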

Monitoring model usage patterns is essential for detecting potential misuse or unethical applications of AI systems. This involves: 

  • Tracking patterns of model queries and usage across users and applications. 
  • Analyzing the context and content of model interactions. 
  • Identifying unusual or suspicious usage patterns that might indicate misuse. 

By implementing robust monitoring and analysis systems, organizations can ensure their AI systems are being used as intended and in line with ethical guidelines. 

Responsible AI: The path forward 

Implementing zero trust in AI and LLM architectures is not only about security; it is about building responsible, ethical and trustworthy AI systems. By integrating robust security measures with strong ethical principles, we can create AI systems that are aligned with societal values and individual rights, and resilient to external threats. 

As we continue to push the boundaries of AI capabilities, let us remember that true innovation in AI is not just about what we can do but what we should do. The future of AI lies not just in its power but in its principles. 

For organizations embarking on this journey, the path forward involves: 

  • Conducting a thorough assessment of current AI systems and practices. 
  • Developing a comprehensive strategy that integrates security and ethical considerations. 
  • Investing in technologies and tools that support secure and responsible AI. 
  • Fostering a culture of responsible AI development across the organization. 
  • Continuously monitoring, evaluating and improving AI systems and practices. 

By taking these steps, organizations can position themselves at the forefront of secure, ethical and responsible AI development — a critical competitive advantage in the AI-driven future. 

Vaibhav Malik is a global partner solution architect at Cloudflare specializing in zero trust security architecture. With over 15 years of experience in networking and security, he helps global partners design and implement effective security solutions. Vaibhav holds a Master’s in Telecommunication from the University of Colorado Boulder and an MBA from the University of Illinois Urbana-Champaign. He is a recognized industry thought leader who advocates for an identity- and data-centric approach to security. 

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects. 

Source: https://www.csoonline.com/article/3604270/implementing-zero-trust-in-ai-and-llm-architectures-a-practitioners-guide-to-secure-and-responsible-ai-systems.html

Multiple US telecommunications companies were breached by a People’s Republic of China (PRC)-backed threat actor as part of a full-blown cyber-espionage campaign, according to a joint FBI and CISA statement issued on Wednesday.

During what the FBI is calling a “broad and significant cyber espionage campaign,” the threat actors used compromised networks within these companies to steal customer call records data.

The offense included theft of “private communications of a limited number of individuals who are primarily involved in government or political activity, and the copying of certain information that was subject to US law enforcement requests pursuant to court orders,” the statement added.

The statement is in line with a recent report by WSJ in October, which said a China-backed threat actor, tracked by Microsoft as Salt Typhoon, has hacked into US internet service provider (ISP) networks to steal sensitive US data and establish persistence.

Queries sent to CISA for further details on the investigation had not received a response by the time this article was published.

Affected telcos likely include AT&T, Verizon

While the FBI withheld further technical details of the investigation, saying it expects “our understanding of these compromises to grow as the investigation continues,” WSJ had reported that three leading US broadband providers — AT&T, Lumen Technologies, and Verizon Communications — may have been affected.  

The campaign targeted sensitive US surveillance systems that are used to comply with court-authorized wiretappings that the companies in question have to provide to the FBI and other agencies for criminal and national security investigations.

The hackers may have also targeted the phones of President-elect Donald Trump and running mate JD Vance. 

Days before the official joint statement on Wednesday, several House committees, including the Energy and Commerce, Homeland Security, Intelligence, and Judiciary panels, reportedly received briefings on the campaign.

The initial reporting of the Salt Typhoon campaign had sent investigators looking for signs of compromise in Cisco Systems routers, as they are the core network components that route much of the traffic on the internet.

A Cisco spokesperson, however, had said that a preliminary investigation revealed no such intrusions. Other key TTPs used by China-backed actors include infecting ISPs through zero-day exploits, as in the case of China’s Volt Typhoon, which CISA warned had used Fortinet bugs in espionage campaigns before its takedown by law enforcement in January.

Source: https://www.csoonline.com/article/3605085/us-says-china-conducted-massive-espionage-through-breached-telcos.html

An office worker received an email that appeared to be from a vendor but was caught in quarantine and the user requested its release. It looked innocent enough, so an administrator released the email. The user clicked on the email to review the contents, which included an attached invoice.

That’s where the trouble started: clicking on the attachment launched a website that requested the worker’s username and password, which they dutifully entered. Unfortunately, there was nothing legitimate about the email, which was phishing for just such an opportunity.

But it got worse — the user had unwittingly given the attacker the ability to go one step further and launch an adversary-in-the-middle (AiTM) attack, the ultimate business email compromise, which seeks to gain entry to banking or other financial transactions. These attacks not only grab credentials; they can also snare session tokens to bypass multifactor authentication. 

AiTM attacks are insidious and can have serious consequences

Several levels of security had failed, and the attackers were now able to infiltrate the network stealthily, impersonate the target and access email conversations and documents in the cloud.

“In a stolen session cookie replay attack, the attacker uses the valid stolen cookie to impersonate the user, circumventing authentication mechanisms of passwords and MFA,” Microsoft notes in its blog on the subject.

“In this campaign, we observed that the attacker signed in with the stolen cookie after a few hours from an IP address based in the United States…. In addition, the attacker generated a new access token, allowing them to persist longer in the environment.”

Once inside, attackers can add new authentication methods to bypass those already in place, often with the goal of building a rule to divert certain mail so that the user or owner of the mailbox doesn’t see it being sent.

Preventing AiTM attacks requires a combination of techniques

To prevent AiTM attacks, Microsoft recommends using security defaults as a baseline set of policies to improve identity security posture. For more granular control, you’ll want to enable conditional access policies; implementing risk-based access policies is particularly helpful.

“Conditional access policies evaluate sign-in requests using additional identity-driven signals like user or group membership, IP location information, and device status, among others, and are enforced for suspicious sign-ins,” according to Microsoft.

“Organizations can protect themselves from attacks that leverage stolen credentials by enabling policies such as compliant devices, trusted IP address requirements, or risk-based policies with proper access control.”

Invest in advanced anti-phishing solutions as a front-line defense, specifically solutions that monitor and scan incoming emails and visited websites. Ensure that you use SmartScreen and other technologies that block malicious websites.

Investigate suspected malicious activities, hunting for sign-in attempts with suspicious characteristics and enable rule sets that look for unusual activity or other more obvious attack processes that identify risky locations, malicious ISPs, unusual user agents, and the use of anonymizer services.

Investigation and clean-up after an AiTM attack

While Microsoft’s AiTM blog discusses what you should do to prevent business email compromise, it’s a bit weak on the specifics of how you should investigate and clean up after the potential attack.

You want to ensure your log files for Microsoft 365 are offloaded to a security information and event management (SIEM) platform, review the Entra or Azure sign-in logs for both interactive and non-interactive logins, and investigate any location that isn’t “normal.”

Note that if the user is on a cellular connection, location may be difficult to determine as normal and may differ geographically from the IP addresses you are used to. It may take some time to correlate what the user was doing and what device they were logging on to. Ensure that you interview and correlate the dates, times, and events with what the user was doing and document accordingly.
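The "review any location that isn't normal" step can be sketched as a first-seen check over exported sign-in events. The field names below are illustrative, not the real Entra sign-in log schema, and a real review would also weigh the cellular-connection caveat above rather than flag on country alone.

```python
from collections import Counter

def unusual_signins(events, min_seen=1):
    """Flag sign-ins from a (user, country) pair that has not been seen
    at least `min_seen` times before in the event stream -- a crude
    baseline-building pass over exported sign-in log rows."""
    flagged = []
    history = Counter()
    for e in sorted(events, key=lambda e: e["time"]):
        key = (e["user"], e["country"])
        if history[key] < min_seen:
            flagged.append(e)  # first-ever sightings are flagged too
        history[key] += 1
    return flagged

events = [
    {"user": "alice", "country": "US", "time": 1},
    {"user": "alice", "country": "US", "time": 2},
    {"user": "alice", "country": "US", "time": 3},
    {"user": "alice", "country": "RU", "time": 4},  # never seen before
]
print([e["country"] for e in unusual_signins(events)])  # ['US', 'RU']
```

The first "US" event is flagged only because the baseline is empty; in practice you would seed the history from weeks of prior logs so only genuinely new locations surface.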

Then download the Unified Audit logs at https://compliance.microsoft.com. From here you can investigate what the attacker did, especially if you have a subscription to the full Microsoft 365 suite that includes OneDrive and Teams.

You’ll want to review activities in Outlook, Teams, SharePoint, OneDrive, Power Automate, and any other corporate assets that the user had access to. Ensure that you obtain the logs for the compromised user and retain them in your SIEM or another secure store.

Determine the depth and severity of the attack

Depending on the impact of the attack, start the cleanup process. Start by forcing a password change on the user account, ensuring that you have revoked all tokens to block the attacker’s fake credentials.

If the consequences of the attack were severe, consider disabling the user’s primary account and setting up a new temporary account as you investigate the extent of the intrusion. You may even consider quarantining the user’s devices and potentially taking forensic-level backups of workstations if you are unsure of the original source of the intrusion so you can best investigate.

Next review all app registrations, changes to service principals, enterprise apps, and anything else the user may have changed or impacted since the time the intrusion was noted. You’ll want to do a deep investigation into the mailbox’s access and permissions. Mandiant has a PowerShell-based script that can assist you in investigating the impact of the intrusion.

“This repository contains a PowerShell module for detecting artifacts that may be indicators of UNC2452 and other threat actor activity,” Mandiant notes. “Some indicators are ‘high-fidelity’ indicators of compromise, while other artifacts are so-called ‘dual-use’ artifacts.”

“Dual-use artifacts may be related to threat actor activity, but also may be related to legitimate functionality. Analysis and verification will be required for these.”

Mandiant does caution that the tool “will not identify a compromise 100% of the time, or tell you if an artifact is legitimate admin activity or threat actor activity.”

OneDrive users should get an additional layer of scrutiny

If your user is accessing Microsoft’s OneDrive, you’ll want to check the file dates of files on the cloud storage to see if anything has been tampered with or impacted by malware. Check Power Automate and Power Apps to determine whether post-exploitation command and control or custom command and control has been set up for the user in question.

Next, ensure that the user’s single sign-on exposure has been limited and review the impact at https://myapps.microsoft.com. Then, as with the devices noted above, verify that all registered or joined devices for the user are legitimate in https://admin.microsoft.com and https://entra.microsoft.com.

It’s strongly recommended that you implement Center for Internet Security settings based on the license you have. Some of these recommended settings cannot be done with the cheapest Microsoft 365 license and require a Microsoft 365 Business Premium subscription at a minimum.

Source: https://www.csoonline.com/article/3604557/how-to-defend-microsoft-networks-from-adversary-in-the-middle-attacks.html

CISOs with Citrix Virtual Apps and Desktops in their environments should patch two holes that could give an authenticated attacker the ability to escalate privileges and run remote code.

This warning comes after researchers at watchTowr discovered the vulnerabilities; they said that what they described as “a carelessly-exposed MSMQ [Microsoft message queuing] instance” can be exploited, via HTTP, to enable unauthenticated remote code execution against the application.

“There seems to be disagreement between Citrix and watchTowr on the seriousness of the vulnerability,” commented Michelle Abraham, a research director in IDC’s security and trust group, “but remote code execution is a serious issue.”

Hotfixes available

For its part, Citrix issued a security bulletin Tuesday in which it “strongly urges affected customers of Citrix Session Recording to install the relevant updated versions of Citrix Session Recording as soon as their upgrade schedule permits.”

Citrix said the two problems are:

  • CVE-2024-8068, a privilege escalation to NetworkService account access, with a CVSS base score of 5.1.
  • CVE-2024-8069, a limited remote code execution with the privileges of a NetworkService account, also with a CVSS base score of 5.1.

Hotfixes for the current release and long term service releases are available.

CISOs should find out whether they are using this application in their IT environments and determine whether the flaws present a risk, Abraham told CSO Online in an email, particularly if the app is being used on business-critical assets. Criticality should factor into the prioritization of remediation.

“Every IT environment is different,” she said, “so the risks and priorities differ. Having a vulnerability management solution that can prioritize and track the remediation workflow is necessary to manage CVEs that may be present in an organization’s IT environment.”

“There have been 29,004 CVE records published in 2024 through the first three quarters of the year,” she added, “more than were published for the whole of 2023.”

The vulnerabilities are in the Session Recording capability of Citrix Virtual Apps and Desktops, which is aimed at letting IT departments deliver a secure work desktop on any device an employee or approved partner uses.

Session Recording creates a video of keyboard and mouse movements that administrators or IT support can use for monitoring, compliance, and troubleshooting. Videos are stored in a Citrix server database folder. The app includes Session Recording Storage Manager, which is a Windows service.

The flaw

But researchers at watchTowr discovered a flaw, outlined in a blog on Tuesday. It said that the Storage Manager receives files via Microsoft Message Queuing (MSMQ) and uses a serialization process to convert session recording data messages into a form that can be interpreted by Windows processes.

The serialization API allows several “terrible” permissions, watchTowr said, including Full Control access to almost any authenticated user. And, the researchers said, Citrix uses .NET’s BinaryFormatter for deserialization.

“Time has told us that using a BinaryFormatter for deserialization is almost always dangerous,” the report said. “It exposes a lot of functionality to whoever can provide it with messages, and while it can be used securely, it provides enough ‘footguns’ that even Microsoft themselves say it shouldn’t be used.”

The report cites a Microsoft paper issued in July detailing the risks of using BinaryFormatter, saying it is not recommended for data processing.
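BinaryFormatter is .NET-specific, but the failure mode is easy to demonstrate with Python's pickle, which carries the same documented warning against deserializing untrusted input: the serialized blob itself chooses what code runs on load. This is an analogy in a different language, not the actual Citrix exploit.

```python
import pickle

class Malicious:
    # pickle consults __reduce__ when serializing; on deserialization
    # the returned callable is invoked with the given arguments,
    # i.e. attacker-chosen code runs inside the receiving process.
    def __reduce__(self):
        return (eval, ("40 + 2",))

blob = pickle.dumps(Malicious())   # what an attacker would send
result = pickle.loads(blob)        # "parsing" the message runs eval("40 + 2")
print(result)                      # 42: proof the payload executed
```

The safe pattern, in any language, is a restricted data format (JSON, protobuf) plus explicit validation rather than a general-purpose object deserializer sitting on a network-reachable queue.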

While MSMQ is usually reached via TCP port 1801, which is not open by default in a Citrix environment, a bug in MSMQ — CVE-2024-21554 — allows MSMQ to be accessed over HTTP. Unfortunately, for some reason HTTP support is enabled when Virtual Apps and Desktops is installed.

Admins who know to look for this can uncheck this option in the Message Queuing menu list in the app.

Knowing all this, the watchTowr researchers built a proof of concept exploit they said could be used by a threat actor.

“This isn’t really a bug in the BinaryFormatter itself, nor a bug in MSMQ,” said watchTowr, “but rather the unfortunate consequence of Citrix relying on the documented-to-be-insecure BinaryFormatter to maintain a security boundary. It’s a ‘bug’ that manifested during the design phase, when Citrix decided which serialization library to use.”

A ‘medium’ risk, says Citrix

In an email to CSO Online, Citrix said it takes reports of security vulnerabilities seriously. Once the company was made aware of this exploit, it worked with watchTowr to validate, reproduce, and mitigate the problem for the protection of customers.

Citrix rates it a “medium” security issue for several reasons:

  • The exploit is limited to Citrix Session Recording server, which is an optional component of a Citrix Virtual Apps and Desktop Deployment.
  • Session Recording Server is typically deployed on a standalone Windows Server. 
  • VDA and other Citrix infrastructure components are not impacted.
  • It is security best practice that Session Recording Server is installed on a trusted machine inside the corporate network, and cannot be reached from the internet. 
  • For the vulnerability reported, the attacker exploits Microsoft MSMQ technology to send malicious objects to the Session Recording server. This requires the attacker to be on a trusted machine in the same domain as the Session Recording server. Citrix recommends that customers enable HTTPS integration with Active Directory as the authentication method for communication with MSMQ.
  • If exploits were successfully executed on the Session Recording server, they would run in the less privileged NetworkService context, not in the System context.
  • Session Recording server can be independently updated from other Citrix components.

‘Emergency’ or ‘celebrity’ issue? It’s unclear, says analyst

The seriousness of the vulnerability, and the difficulty of exploiting it, depends on whether it can be exploited in an unauthenticated session, said Erik Nost, a Forrester Research senior analyst; the researchers and Citrix currently disagree on that.

“The best thing for security pros to understand in scenarios like this is that there are both emergency vulnerabilities (where everything needs to be dropped to respond to them) and celebrity vulnerabilities (which can get a lot of attention from news outlets and researchers),” he said. “Oftentimes a vulnerability is both an emergency and a celebrity, but that’s not always the case. This one seems to be hitting celebrity status, but it’s not clear yet if it’s an emergency.”

This sheds light on the need for organizations to have a critical vulnerability response plan, he said, so that they are prepared for both emergency and celebrity vulnerabilities. Even when a vulnerability is not an emergency, it is best practice to have a strong inventory of systems and applications so teams can determine whether they are impacted. For a celebrity vulnerability, communications can be prepped for customers and internal teams so everyone knows whether there is an impact, and its scope. Emergency response warrants these communications as well, along with the remediation efforts.

This story has been updated with comments from Forrester.

Source: https://www.csoonline.com/article/3604865/citrix-admins-advised-to-install-hotfixes-to-block-vulnerabilities.html