
Data Protection FAQs

Enterprises and government entities are committed to digital transformation. For them to succeed, information must be trustworthy and reliable. This is why keeping data secure throughout its lifecycle has become a critical priority.

For over three decades, Thales has been a leader in digital security, including encryption; encryption key and "secrets" management; hardware security modules (HSMs); and signing, certificates, and time stamping. This Q&A offers brief, accurate definitions of key terms in each of these areas of digital security practice, as well as links for further reading.

What is Encryption Key Management?

Encryption is a process that uses algorithms to encode data as ciphertext. The ciphertext can only be made meaningful again if the person or application accessing the data holds the encryption keys needed to decode it. So if the data is stolen or accidentally shared, it remains protected because, thanks to data encryption, it is indecipherable.
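
This round trip can be sketched with a toy symmetric cipher. The construction below (a SHA-256-derived keystream XORed with the data) and the sample plaintext are illustrative assumptions, not a production algorithm; the point is that the same key that produced the ciphertext is required to recover the plaintext:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy cipher: derive a keystream by hashing the key with a block
    # counter, then XOR it with the data. Because XOR is its own
    # inverse, encryption and decryption are the same operation.
    out = bytearray()
    for block in range(0, len(data), 32):
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[block:block + 32], ks))
    return bytes(out)

key = secrets.token_bytes(32)              # the data encryption key
plaintext = b"cardholder record 4242"
ciphertext = keystream_xor(key, plaintext)

assert ciphertext != plaintext                        # indecipherable as stored
assert keystream_xor(key, ciphertext) == plaintext    # the key recovers the data
```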

Controlling and maintaining data encryption keys is an essential part of any data encryption strategy, because, with the encryption keys, a cybercriminal can return encrypted data to its original unencrypted state. An encryption key management system includes generation, exchange, storage, use, destruction and replacement of encryption keys.

According to Securosis’s White Paper, "Pragmatic Key Management for Data Encryption":

  • Many data encryption systems don’t bother with “real” key management – they only store data encryption keys locally, and users never interact with the keys directly. Super-simple implementations don’t bother to store the key at all – it is generated as needed from the passphrase. In slightly more complex (but still relatively simple) cases the encryption key is actually stored with the data, protected by a series of other keys which are still generated from passphrases.
  • There is a clear division between this and the enterprise model, where you actively manage keys. Key management involves separating keys from data for increased flexibility and security. You can have multiple keys for the same data, the same key for multiple files, key backup and recovery, and many more choices.
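
The enterprise model described above, with keys separated from data, is often realized as envelope encryption: each file gets its own data encryption key (DEK), which is itself wrapped under a key-encryption key (KEK) held by the key manager. The sketch below uses a toy XOR-keystream cipher and invented key names purely for illustration:

```python
import hashlib
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    out = bytearray()
    for i in range(0, len(data), 32):
        ks = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], ks))
    return bytes(out)

kek = secrets.token_bytes(32)                # held by the key manager/HSM
dek = secrets.token_bytes(32)                # per-file data encryption key
record = b"2023 payroll ledger"

stored_ciphertext = xor_cipher(dek, record)  # lives with the data store
wrapped_dek = xor_cipher(kek, dek)           # lives with the key manager

# To read the data, first unwrap the DEK with the KEK, then decrypt.
recovered = xor_cipher(xor_cipher(kek, wrapped_dek), stored_ciphertext)
assert recovered == record
```

Because the DEK is stored only in wrapped form, the same data can be re-keyed, backed up, or revoked by operating on keys alone, without touching the encrypted data.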

Best practice is to use a dedicated external key management system. There are four types¹:

1. An HSM or other hardware key management appliance, which provides the highest level of physical security

2. A key management virtual appliance

3. Key management software, which can run either on a dedicated server or within a virtual/cloud server

4. Key Management Software as a Service (SaaS)

¹ https://cpl.thalesgroup.com/resources/encryption/securosis-cracking-confusion-encryption-and-tokenization-data-centers-servers-and-white-paper

Related Articles

Secure your data, comply with regulatory and industry standards, and protect your organization’s reputation. Learn how Thales can help.

What is a Centralized Key Management System?

As organizations deploy ever-increasing numbers of encryption solutions, they find themselves managing inconsistent policies and different levels of protection, and facing escalating costs. The best way through this maze is often to transition to a centralized encryption key management system. In this arrangement, and in contrast to the use of hardware security modules (HSMs), the key management system performs only key management tasks, acting on behalf of other systems that perform cryptographic operations using those keys.

The benefits of a centralized key management system include:

  • Unified key management and encryption policies
  • System-wide key revocation
  • A single point to protect
  • Cost reduction through automation
  • Consolidated audit information
  • A single point for recovery
  • Convenient separation of duty
  • Key mobility

What is Storage Encryption?

Storage encryption is the use of encryption for data both in transit and on storage media. Data is encrypted while it passes to storage devices, such as individual hard disks, tape drives, or the libraries and arrays that contain them. Using storage level encryption along with database and file encryption goes a long way toward offsetting the risk of losing your data. Like network encryption, storage encryption is a relatively blunt instrument, typically protecting all the data on each tape or disk regardless of the type or sensitivity of the data.

Storage encryption is a good way to ensure your data is safe if the media is lost. However, it is considered more secure to encrypt data in databases at the level of individual files, volumes, or columns. This may even be required for compliance if data is shared with other users or is subject to specific audit requirements.

What is Bring Your Own Key (BYOK)?

While cloud computing offers many advantages, a major disadvantage has been security, because data physically resides with the cloud service provider (CSP) and out of the direct control of the owner of the data. For enterprises that elect to use encryption to protect their data, securing their encryption keys is of paramount importance.

Bring Your Own Key (BYOK) is an encryption key management system that allows enterprises to encrypt their data and retain control and management of their encryption keys. However, some BYOK plans upload the encryption keys to the CSP infrastructure. In these cases, the enterprise has once again forfeited control of its keys.

A best-practice solution to this "Bring Your Own Key" problem is for the enterprise to generate strong keys in a tamper-resistant hardware security module (HSM) and control the secure export of its keys to the cloud, thereby strengthening its key management practices.
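
The export step can be sketched as key wrapping: the enterprise encrypts its key under the CSP's public wrapping key before upload, so the plaintext key is never exposed in transit. The textbook-RSA numbers below are a toy illustration only; a real BYOK flow uses an HSM and a standardized wrapping scheme (e.g. the PKCS#11 mechanism CKM_RSA_AES_KEY_WRAP) with far larger keys:

```python
import secrets

# Toy textbook-RSA wrapping key pair (p=61, q=53); illustration only.
n, e, d = 3233, 17, 2753   # CSP publishes (n, e); d stays in the CSP's HSM

# The enterprise generates its tenant key under its own control...
tenant_key = secrets.randbelow(n - 2) + 2

# ...and exports only the wrapped form to the cloud.
wrapped_key = pow(tenant_key, e, n)

# Only the CSP's key-import service, which holds d, can unwrap it.
assert pow(wrapped_key, d, n) == tenant_key
```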

What is FIPS 140-2?

FIPS (Federal Information Processing Standard) 140-2 is the benchmark for validating the effectiveness of cryptographic hardware. If a product has a FIPS 140-2 certificate you know that it has been tested and formally validated by the U.S. and Canadian Governments. Although FIPS 140-2 is a U.S./Canadian Federal standard, FIPS 140-2 compliance has been widely adopted around the world in both governmental and non-governmental sectors as a practical security benchmark and realistic best practice.

Organizations use the FIPS 140-2 standard to ensure that the hardware they select meets specific security requirements. The FIPS certification standard defines four increasing, qualitative levels of security:

Level 1: Requires production-grade equipment and externally tested algorithms.

Level 2: Adds requirements for physical tamper-evidence and role-based authentication. Software implementations must run on an Operating System approved to Common Criteria at EAL2.

Level 3: Adds requirements for physical tamper-resistance and identity-based authentication. There must also be physical or logical separation between the interfaces by which “critical security parameters” enter and leave the module. Private keys can only enter or leave in encrypted form.

Level 4: This level makes the physical security requirements more stringent, requiring the device to respond actively to tampering by erasing its contents when it detects various forms of environmental attack.

The FIPS 140-2 standard technically allows for software-only implementations at level 3 or 4 but applies such stringent requirements that none have been validated.

For many organizations, requiring FIPS certification at FIPS 140 level 3 is a good compromise between effective security, operational convenience, and choice in the marketplace.

What is DNSSEC?

The domain name system (DNS) is effectively the Internet’s address book; it enables website names to be matched to their corresponding registered IP addresses. But illicit alteration of web queries can point end users or services to rogue IP addresses and route them to illegitimate servers for the purpose of data theft. The Domain Name System Security Extensions (DNSSEC) have been created in response to this threat. DNSSEC is a mechanism that involves the use of digital signatures to enable servers to authenticate and verify the integrity of DNS responses to queries.

The Role of Hardware Security Modules

Hardware Security Modules (HSMs) enable top level domains (TLDs), registrars, registries, and enterprises to secure critically important signing processes used to validate the integrity of DNSSEC responses across the Internet. They protect the DNS from what are commonly referred to as “cache poisoning” and “man-in-the-middle” attacks. HSMs provide proven and auditable security advantages, enabling proper generation and storage for signing keys to assure the integrity of the DNSSEC validation process.
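
The validation mechanism can be sketched as follows. The zone's signing key signs a hash of each record, and any resolver can check the signature with the public key, so a tampered answer fails validation. The textbook-RSA numbers and record strings below are toy assumptions; real DNSSEC publishes DNSKEY records and signs record sets as RRSIG records:

```python
import hashlib

n, e, d = 3233, 17, 2753   # toy key pair (p=61, q=53); illustration only

def digest(record: bytes) -> int:
    # Hash the record and reduce it into the toy modulus.
    return int.from_bytes(hashlib.sha256(record).digest(), "big") % n

def sign(record: bytes) -> int:          # performed by the zone signing key
    return pow(digest(record), d, n)

def verify(record: bytes, sig: int) -> bool:   # performed by the resolver
    return pow(sig, e, n) == digest(record)

record = b"www.example.com. A 93.184.216.34"
sig = sign(record)
assert verify(record, sig)

# An altered answer pointing at a rogue address fails validation.
assert not verify(b"www.example.com. A 203.0.113.66", sig)
```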

What is a Credentials Management System?

Organizations require user credentials to control access to sensitive data. Deploying a sound credential management system—or several credential management systems—is critical to secure all systems and information. Authorities must be able to create and revoke credentials as customers and employees come and go, change roles, and as business processes and policies evolve. Furthermore, the rise of privacy regulations and other security mandates increases the need for organizations to demonstrate the ability to validate the identity of online consumers and internal privileged users.

Challenges Associated with Credential Management

  • Attackers who gain control of your credential management system can issue credentials that make them an insider, potentially with privileges to compromise systems undetected.
  • Compromised credential management processes result in the need to re-issue credentials, which can be an expensive and time-consuming process.
  • Credential validation rates can vary enormously and can easily outpace the performance characteristics of a credential management system, jeopardizing business continuity.
  • Business application owners’ expectations around security and trust models are rising and can expose credential management as a weak link that may jeopardize compliance claims.

Hardware Security Modules (HSMs)

Hardware Security Modules (HSMs) are hardened, tamper-resistant hardware devices that strengthen encryption practices by generating keys, encrypting and decrypting data, and creating and verifying digital signatures. Some hardware security modules (HSMs) are certified at various FIPS 140-2 Levels.

While it’s possible to deploy a credential management platform in a purely software-based system, this approach is inherently less secure. Token signing and encryption keys handled outside the cryptographic boundary of a certified HSM are significantly more vulnerable to attacks that could compromise the token signing and distribution process. HSMs are the only proven and auditable way to secure valuable cryptographic material and deliver FIPS-approved hardware protection.

HSMs enable your enterprise to:

  • Secure token signing keys within carefully designed cryptographic boundaries, employing robust access control mechanisms with enforced separation of duties in order to ensure that keys are only used by authorized entities
  • Ensure availability by using sophisticated key management, storage and redundancy features
  • Deliver high performance to support increasingly demanding enterprise requirements for access to resources from different devices and locations

What is Key Management Interoperability Protocol (KMIP)?

According to OASIS (Organization for the Advancement of Structured Information Standards), “KMIP enables communication between key management systems and cryptographically-enabled applications, including email, databases, and storage devices.”

KMIP simplifies the way companies manage cryptographic keys, eliminating the need for redundant, incompatible key management processes. Key lifecycle management — including the generation, submission, retrieval, and deletion of cryptographic keys — is enabled by the standard. Designed for use by both legacy and new cryptographic applications, KMIP supports many kinds of cryptographic objects, including symmetric keys, asymmetric keys, digital certificates, and authentication tokens.
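
The lifecycle operations the standard names can be sketched as a minimal in-memory key manager. This is illustrative only: the class, method names, and states below are invented for the sketch, and real KMIP is a wire protocol carried over TLS in a tagged binary (TTLV) encoding, not a Python API:

```python
import secrets
import uuid

class ToyKeyManager:
    """Illustrative server exposing create/get/revoke/destroy operations."""

    def __init__(self):
        self._keys = {}                    # unique identifier -> [state, material]

    def create(self, length: int = 32) -> str:
        uid = str(uuid.uuid4())
        self._keys[uid] = ["Active", secrets.token_bytes(length)]
        return uid

    def get(self, uid: str) -> bytes:
        state, material = self._keys[uid]
        if state != "Active":
            raise PermissionError(f"key {uid} is {state}")
        return material

    def revoke(self, uid: str) -> None:
        self._keys[uid][0] = "Revoked"

    def destroy(self, uid: str) -> None:
        self._keys[uid] = ["Destroyed", None]   # key material is discarded

km = ToyKeyManager()
uid = km.create()
assert len(km.get(uid)) == 32
km.revoke(uid)
try:
    km.get(uid)                # revoked keys can no longer be retrieved
except PermissionError:
    pass
```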

KMIP was developed by OASIS, which is a global nonprofit consortium that works on the development, convergence, and adoption of standards for security, Internet of Things, energy, content technologies, emergency management, and other areas.

What is an Asymmetric Key or Asymmetric Key Cryptography?

Asymmetric keys are the foundation of Public Key Infrastructure (PKI), a cryptographic scheme requiring two different keys: one to lock or encrypt the plaintext, and one to unlock or decrypt the ciphertext. Neither key performs both functions. One key is published (the public key) and the other is kept private (the private key). If the lock/encryption key is the one published, the system enables private communication from the public to the owner of the unlocking key. If the unlock/decryption key is the one published, the system serves as a signature verifier of documents locked by the owner of the private key. This system is also called asymmetric key cryptography.
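
The two-key property can be shown with toy textbook-RSA numbers (p=61, q=53; a classic teaching example, not a usable key size, and real systems also add padding):

```python
n = 3233   # public modulus (61 * 53)
e = 17     # public (lock) exponent
d = 2753   # private (unlock) exponent

message = 65
ciphertext = pow(message, e, n)            # anyone can encrypt with the public key
assert pow(ciphertext, d, n) == message    # only the private key decrypts

# Reversing the roles gives signing: lock with the private key,
# and anyone holding the public key can verify.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```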

What is a Symmetric Key?

In cryptography, a symmetric key is one that is used both to encrypt and decrypt information. This means that to decrypt information, one must have the same key that was used to encrypt it. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. This requirement that both parties have access to the secret key is one of the main drawbacks of symmetric key encryption, in comparison to public-key encryption.

Asymmetric encryption, on the other hand, uses a second, different key to decrypt information. (See “What is an Asymmetric Key or Asymmetric Key Cryptography?”)

What is the Encryption Key Management Lifecycle?

Key management is the complete set of operations necessary to create, maintain, protect, and control the use of cryptographic keys. Keys have a life cycle: they are created, live useful lives, and are retired. The typical encryption key lifecycle includes the following phases:

  • Key generation
  • Key registration
  • Key storage
  • Key distribution and installation
  • Key use
  • Key rotation
  • Key backup
  • Key recovery
  • Key revocation
  • Key suspension
  • Key destruction
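
The phases above can be sketched as a state machine in which only certain transitions are permitted. The transition table below is illustrative, not a normative lifecycle model:

```python
# Allowed transitions between lifecycle phases (illustrative).
ALLOWED = {
    "generated":   {"registered"},
    "registered":  {"stored"},
    "stored":      {"distributed"},
    "distributed": {"active"},
    "active":      {"rotated", "suspended", "revoked", "backed_up"},
    "backed_up":   {"active", "recovered"},
    "recovered":   {"active"},
    "rotated":     {"active"},
    "suspended":   {"active", "revoked"},
    "revoked":     {"destroyed"},
    "destroyed":   set(),                 # terminal state
}

class ManagedKey:
    def __init__(self):
        self.state = "generated"

    def transition(self, new_state: str) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not permitted")
        self.state = new_state

key = ManagedKey()
for step in ("registered", "stored", "distributed", "active", "revoked", "destroyed"):
    key.transition(step)
assert key.state == "destroyed"
```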

Defining and enforcing encryption key management policies affects every stage of the key management life cycle. Each encryption key or group of keys needs to be governed by an individual key usage policy defining which device, group of devices, or types of application can request it, and what operations that device or application can perform — for example, encrypt, decrypt, or sign. In addition, encryption key management policy may dictate additional requirements for higher levels of authorization in the key management process to release a key after it has been requested or to recover the key in case of loss.
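
A per-key usage policy of the kind described above can be sketched as a lookup that answers "may this requester perform this operation with this key?" The key and requester names are invented for illustration:

```python
# Illustrative usage policy: which requesters may use the key,
# and which operations each requester may perform.
POLICY = {
    "payroll-db-key": {
        "payroll-db":     {"encrypt", "decrypt"},
        "backup-service": {"encrypt"},          # may back up, never read
    }
}

def authorize(key_id: str, requester: str, operation: str) -> bool:
    allowed_ops = POLICY.get(key_id, {}).get(requester, set())
    return operation in allowed_ops

assert authorize("payroll-db-key", "payroll-db", "decrypt")
assert not authorize("payroll-db-key", "backup-service", "decrypt")
assert not authorize("payroll-db-key", "intruder", "decrypt")
```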

What is a General Purpose Hardware Security Module (HSM)?

Hardware Security Modules (HSMs) are hardened, tamper-resistant hardware devices that strengthen encryption practices by generating keys, encrypting and decrypting data, and creating and verifying digital signatures. Some hardware security modules (HSMs) are certified at various FIPS 140-2 Levels. Hardware security modules (HSMs) are frequently used to:

  • Meet and exceed established and emerging regulatory standards for cybersecurity
  • Achieve higher levels of data security and trust
  • Maintain high service levels and business agility

What is a Payment Hardware Security Module (HSM)?

A payment HSM is a hardened, tamper-resistant hardware device used primarily by the retail banking industry to provide high levels of protection for cryptographic keys and customer PINs used during the issuance of magnetic stripe and EMV chip cards (and their mobile application equivalents) and the subsequent processing of credit and debit card payment transactions. Payment HSMs normally provide native cryptographic support for all the major card scheme payment applications. They undergo rigorous independent hardware certification under global schemes such as FIPS 140-2 and PCI HSM, as well as additional regional security requirements such as MEPS in France and APCA in Australia.

Some of their common use cases in the payments ecosystem include:

  • PIN generation, management and validation
  • PIN block translation during the network switching of ATM and POS transactions
  • Card, user and cryptogram validation during payment transaction processing
  • Payment credential issuing for payment cards and mobile applications
  • Point-to-point encryption (P2PE) key management and secure data decryption
  • Sharing keys securely with third parties to facilitate secure communications

What Is Remote HSM Management?

Remote hardware security module (HSM) management enables security teams to perform tasks linked to key and device management from a central remote location, avoiding the need to travel to the data center. A remote HSM management solution delivers operational cost savings in addition to making the task of managing HSMs more flexible and on-demand.

Depending on the HSM solution used, remote HSM management enables:

  • Greater, more flexible control
  • Strong access control based on digital credentials rather than physical keys
  • Stronger audit controls, tracking activities down to individual card credentials
  • Quicker identification of remote HSM status issues
  • Simpler software and license upgrade installation
  • Reduced risk of errors
  • Simplified logistics

What is Host Card Emulation (HCE)?

Host card emulation (HCE) is a technology for securing a mobile phone such that it can be used to make credit or debit transactions at physical point-of-sale (POS) terminals. With HCE, critical payment credentials are stored in a secure shared repository (the issuer data center or private cloud) rather than on the phone. Limited-use credentials are delivered to the phone in advance to enable contactless transactions to take place.

This approach eliminates the need for Trusted Service Managers (TSMs) and shifts control back to the banks. However, it brings with it a different set of security and risk challenges.

  • A centralized service that stores many millions of payment credentials, or creates one-time-use credentials on demand, is an obvious point of attack. Although banks have issued cards for years, those systems have largely been offline and have not required round-the-clock interaction with the payment token (in this case, a plastic card). HCE requires these services to be online and accessible in real time as part of individual payment transactions. Failure to protect these service platforms places the issuer at considerable risk of fraud.
  • Although the phone no longer stores payment credentials, it still plays three critical security roles, all of which create opportunities for theft or substitution of credentials or transaction information.
    • It provides the means for applications to request card data stored in the HCE service.
    • It is the method by which a user is authenticated and authorizes the service to provide the payments credentials.
    • It provides the communications channel over which payment credentials are passed to the POS terminal.
  • All mobile payments schemes are more complex than traditional card payments, yet smart phone user expectations are extremely high.
    • Poor mobile network coverage can make HCE services inaccessible.
    • Complex authentication schemes lead to errors.
    • Software or hardware incompatibility can stop transactions.

What is Root of Trust?

Root of Trust (RoT) is a source that can always be trusted within a cryptographic system. Because cryptographic security is dependent on keys to encrypt and decrypt data and perform functions such as generating digital signatures and verifying signatures, RoT schemes generally include a hardened hardware module. A principal example is the hardware security module (HSM) which generates and protects keys and performs cryptographic functions within its secure environment.

Because this module is for all intents and purposes inaccessible outside the computer ecosystem, that ecosystem can trust the keys and other cryptographic information it receives from the root of trust module to be authentic and authorized. This is particularly important as the Internet of Things (IoT) proliferates, because components of computing ecosystems need a way to determine that the information they receive is authentic in order to avoid being hacked. The RoT safeguards the security of data and applications and helps to build trust in the overall ecosystem.

RoT is a critical component of public key infrastructures (PKIs), where it generates and protects root and certificate authority keys; of code signing, where it ensures software remains secure, unaltered, and authentic; and of the creation of digital certificates for credentialing and authenticating proprietary electronic devices in IoT applications and other network deployments.

What is a Digital Certificate?

Digital certificates are the credentials that facilitate the verification of identities between users in a transaction. Much as a passport certifies one’s identity as a citizen of a country, the purpose of a digital certificate is to establish the identity of users within the ecosystem. Because digital certificates are used to identify the users to whom encrypted data is sent, or to verify the identity of the signer of information, protecting the authenticity and integrity of the certificate is imperative in order to maintain the trustworthiness of the system. In order to bind public keys with their associated user (owner of the private key), public key infrastructures (PKIs) use digital certificates.

What is a Certificate Authority?

A Certificate Authority (CA) is the core component of a public key infrastructure (PKI), responsible for establishing a hierarchical chain of trust. CAs issue the digital credentials used to certify the identity of users. CAs underpin the security of a PKI and the services it supports, and therefore can be the focus of sophisticated targeted attacks. To mitigate the risk of attacks against Certificate Authorities, physical and logical controls, as well as hardening mechanisms such as hardware security modules (HSMs), have become necessary to ensure the integrity of a PKI.

What is Code Signing?

In public key cryptography, code signing is a specific use of certificate-based digital signatures that enables an organization to verify the identity of the software publisher and certify the software has not been changed since it was published.

Digital signatures provide a proven cryptographic process for software publishers and in-house development teams to protect their end users from cybersecurity dangers, including advanced persistent threats (APTs), such as Duqu 2.0. Digital signatures ensure software integrity and authenticity. Digital signatures enable end users to verify publisher identities while simultaneously validating that the installation package has not been changed since it was signed. All modern operating systems look for and validate digital signatures during installation, and warnings about unsigned code can cause end users to abandon installation.
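
The verification described above can be sketched as follows: the publisher signs a digest of the package with its private key, and at install time anyone can recompute the digest and check it against the signature using the public key. The textbook-RSA numbers and package bytes are toy assumptions for illustration:

```python
import hashlib

n, e, d = 3233, 17, 2753   # toy key pair (p=61, q=53); illustration only

package = b"setup-1.4.2 binary contents"

# Publisher side: hash the package, reduce into the toy modulus,
# and sign the digest with the private exponent.
digest = int.from_bytes(hashlib.sha256(package).digest(), "big") % n
signature = pow(digest, d, n)

# Installer side: from (package, signature, public key) alone,
# recompute the digest and verify the signature.
check = int.from_bytes(hashlib.sha256(package).digest(), "big") % n
assert pow(signature, e, n) == check   # unchanged since it was signed
```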

What is a Digital Signature?

Digital signatures provide a proven cryptographic process for software publishers and in-house development teams to protect their end users from cybersecurity dangers, including advanced persistent threats (APTs), such as Duqu 2.0. Digital signatures ensure the integrity and authenticity of software and documents by enabling end users to verify publisher identities while validating that the code or document has not been changed since it was signed.

Digital signatures go beyond electronic versions of traditional signatures by invoking cryptographic techniques to dramatically increase security and transparency, both of which are critical to establishing trust and legal validity. As an application of public key cryptography, digital signatures can be applied in many different settings, from a citizen filing an online tax return, to a procurement officer executing a contract with a vendor, to an electronic invoice, to a software developer publishing updated code.

What is Time Stamping?

Time stamping is an increasingly valuable complement to digital signing practices, enabling organizations to record when a digital item—such as a message, document, transaction or piece of software—was signed. For some applications, the timing of a digital signature is critical, as in the case of stock trades, lottery ticket issuance and some legal proceedings. Even when time is not intrinsic to the application, time stamping is helpful for record keeping and audit processes, because it provides a mechanism to prove whether the digital certificate was valid at the time it was used. The growing importance of digital signing solutions has created a corresponding demand for time stamping, so many software programs, such as Microsoft Office, support time stamping capabilities.
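
A time-stamp token can be sketched as a trusted time-stamping authority binding the document's digest to the current time and authenticating the pair. In this toy sketch an HMAC with an invented authority secret stands in for the authority's digital signature (real time-stamp tokens, per RFC 3161, use public-key signatures):

```python
import hashlib
import hmac
import time

TSA_KEY = b"time-stamping-authority-secret"   # illustrative stand-in

def issue_timestamp(document: bytes) -> dict:
    # The authority binds (document digest, time) and authenticates it.
    token = {
        "digest": hashlib.sha256(document).hexdigest(),
        "time": int(time.time()),
    }
    payload = f"{token['digest']}|{token['time']}".encode()
    token["mac"] = hmac.new(TSA_KEY, payload, hashlib.sha256).hexdigest()
    return token

def check_timestamp(document: bytes, token: dict) -> bool:
    # Recompute the binding from the document and the claimed time.
    payload = f"{hashlib.sha256(document).hexdigest()}|{token['time']}".encode()
    expected = hmac.new(TSA_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["mac"])

doc = b"stock trade: sell 100 @ 41.20"
token = issue_timestamp(doc)
assert check_timestamp(doc, token)
assert not check_timestamp(b"stock trade: sell 100 @ 41.25", token)
```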

The Importance of Security

If time stamping is to add real value, the time stamp must be secure.

Risks Associated with Insecure Time Stamping

  • The inability to trust electronic processes can result in costly paper trails to back up electronic records.
  • By manipulating a computer clock, an attacker can easily compromise a software-based time stamping process—thereby invalidating the overall signing process.
  • Insecure time stamping or digital signing processes can expose organizations to compliance problems and legal challenges.
  • Even after private signing keys and certificates have been revoked, users can still have access to them. Without time stamping, organizations cannot prove whether signatures were created before or after a certificate was revoked.

What is PKI?

A public key infrastructure (PKI) is the set of hardware, software, policies, processes, and procedures required to create, manage, distribute, use, store, and revoke digital certificates and public keys. PKIs are the foundation that enables the use of technologies such as digital signatures and encryption across large user populations. PKIs deliver the elements essential for a secure and trusted business environment for e-commerce and the growing Internet of Things (IoT).

PKIs help establish the identity of people, devices, and services – enabling controlled access to systems and resources, protection of data, and accountability in transactions. Next generation business applications are becoming more reliant on PKI technology to guarantee high assurance, because evolving business models are becoming more dependent on electronic interaction requiring online authentication and compliance with stricter data security regulations.

The Role of Certificate Authorities (CAs)

In order to bind public keys with their associated user (owner of the private key), PKIs use digital certificates. Digital certificates are the credentials that facilitate the verification of identities between users in a transaction. Much as a passport certifies one’s identity as a citizen of a country, the digital certificate establishes the identity of users within the ecosystem. Because digital certificates are used to identify the users to whom encrypted data is sent, or to verify the identity of the signer of information, protecting the authenticity and integrity of the certificate is imperative to maintain the trustworthiness of the system.

Certificate authorities (CAs) issue the digital credentials used to certify the identity of users. CAs underpin the security of a PKI and the services they support, and therefore can be the focus of sophisticated targeted attacks. To mitigate the risk of attacks against CAs, physical and logical controls, as well as hardening mechanisms such as hardware security modules (HSMs), have become necessary to ensure the integrity of a PKI.

PKI Deployment

PKIs provide a framework that enables cryptographic data security technologies such as digital certificates and signatures to be effectively deployed on a mass scale. PKIs support identity management services within and across networks and underpin online authentication inherent in secure socket layer (SSL) and transport layer security (TLS) for protecting internet traffic, as well as document and transaction signing, application code signing, and time-stamping. PKIs support solutions for desktop login, citizen identification, mass transit, mobile banking, and are critically important for device credentialing in the IoT. Device credentialing is becoming increasingly important to impart identities to growing numbers of cloud-based and internet-connected devices that run the gamut from smart phones to medical equipment.

Cryptographic Security

Using the principles of asymmetric and symmetric cryptography, PKIs facilitate the establishment of a secure exchange of data between users and devices – ensuring authenticity, confidentiality, and integrity of transactions. Users (also known as “Subscribers” in PKI parlance) can be individual end users, web servers, embedded systems, connected devices, or programs/applications that are executing business processes. Asymmetric cryptography provides the users, devices, or services within an ecosystem with a key pair composed of a public and a private key component. A public key is available to anyone in the group for encryption or for verification of a digital signature. The private key, on the other hand, must be kept secret and is used only by the entity to which it belongs, typically for tasks such as decryption or the creation of digital signatures.

The Increasing Importance of PKIs

With evolving business models becoming more dependent on electronic transactions and digital documents, and with more Internet-aware devices connected to corporate networks, the role of a PKI is no longer limited to isolated systems such as secure email, smart cards for physical access or encrypted web traffic. PKIs today are expected to support larger numbers of applications, users and devices across complex ecosystems. And with stricter government and industry data security regulations, mainstream operating systems and business applications are becoming more reliant than ever on an organizational PKI to guarantee trust.

Learn more about how PKIs secure digital applications and validate everything from transactions and identities to supply chains

What is certification authority or root private key theft?

The theft of certification authority (CA) or root private keys enables an attacker to take over an organization’s public key infrastructure (PKI) and issue bogus certificates, as was done in the Stuxnet attack. Any such compromise may force revocation and reissuance of some or all of the previously issued certificates. A root compromise, such as a stolen root private key, destroys the trust in your PKI and can force you to establish a new root and subordinate issuing CA infrastructure. This is very expensive, in addition to being damaging to an enterprise’s corporate identity.

The integrity of an organization’s private keys, throughout the infrastructure from root to issuing CAs, provides the core trust foundation of its PKI and, as such, must be safeguarded. The recognized best practice for securing these critical keys is to use a FIPS 140-2 Level 3 certified hardware security module (HSM), a tamper-resistant device that meets the highest security and assurance standards.

What is inadequate separation (segregation) of duties for PKIs?

Weak controls over the use of signing keys can enable the certification authority (CA) to be misused, even if the keys themselves are not compromised. An attacker might issue fraudulent certificates that allow a device or user to impersonate a legitimate user and conduct a man-in-the-middle attack, or digitally sign malware that then propagates because it appears to come from a trusted source.

Proper security controls need to be established when designing an organization’s public key infrastructure (PKI). This includes separating CA roles and setting policies so that the operation fails if an individual attempts to perform more than one CA role. Setting up policies and procedures to ensure proper separation of duties, including establishing contingencies when a team member leaves, is critical to the security and integrity of the PKI and must be part of the initial design. It is preferable to implement a technology that enables a technical solution to the separation of duties policy. For example, presentation of an “M of N” smart card set can enforce a robust separation of duties policy by simply not allowing an individual to issue certificates without the presence of, for example, a Security Officer.
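
The "M of N" control described above can be sketched as a simple quorum check: a sensitive CA operation proceeds only when at least M distinct, registered officer credentials are presented. The officer names, roles, and threshold below are hypothetical, and a production deployment would enforce this in hardware (for example, via an HSM's smart card sets) rather than in application code.

```python
# Hypothetical sketch of an "M of N" separation-of-duties check.
REGISTERED_OFFICERS = {"alice", "bob", "carol", "dave", "erin"}  # N = 5 officers
QUORUM = 3                                                       # M = 3 must approve

def authorize_ca_operation(presented_credentials):
    """Allow the operation only if a quorum of distinct, registered officers approves."""
    valid = set(presented_credentials) & REGISTERED_OFFICERS
    return len(valid) >= QUORUM

assert not authorize_ca_operation(["alice"])                  # one officer: denied
assert not authorize_ca_operation(["alice", "alice", "bob"])  # duplicates don't count
assert authorize_ca_operation(["alice", "bob", "carol"])      # quorum met: allowed
```

The point of the design is that no single individual, however privileged, can issue certificates alone.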

What is insufficient scalability in a PKI?

A public key infrastructure (PKI) that fails to factor in the growth of the organization and its users will eventually need to be redesigned as the business scales, meaning lost productivity and customer impact. With new applications coming online daily and many users demanding access via multiple devices, good business planning requires that PKI scalability be considered from the outset.

Many organizations will need more than one certification authority (CA) to meet their growing requirements — certificates are used for logon authentication, digital document signing, email, and more. A root CA can act as the “master” with multiple subordinate CAs covering the various use cases. Alternatively, the organization can plan for scale by establishing multiple root CAs and multiple hierarchies. Regardless of the strategy, the goal is to get it right the first time to ensure an organization’s PKI can keep up with its growing needs.

What is subversion of online certificate validation?

Subversion of online certificate validation processes can enable malicious use of revoked certificates. An attacker who can prevent a certificate from reaching the certificate revocation list can impersonate a legitimate actor and execute malicious activity, while the victim remains unaware that it is communicating with an illegitimate participant.

Defining certificate authentication policies and procedures is an instrumental part of a public key infrastructure’s (PKI) design. Further, proper execution and enforcement will ensure that revoked certificates — and users — are denied access. While many organizations will use a certificate revocation list (CRL), some might opt for a different approach, such as online certificate status protocol (OCSP) or authentication, authorization and accounting (AAA).

Such decisions need to be part of the initial design discussions based on the needs of the organization. It is worth noting that any private keys deployed in the certificate revocation process need to be protected equally with the keys that form the basis of the issuing process.
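
The CRL-based approach mentioned above reduces, at its core, to a fail-closed membership test against the revocation list. The serial numbers below are made up, and a real deployment would fetch a signed, current CRL (or query an OCSP responder) rather than consult a local set.

```python
# Minimal sketch of a CRL-style revocation check (illustrative only).
revoked_serials = {"1A2B3C", "4D5E6F"}   # serial numbers published on the CRL

def certificate_is_acceptable(serial, crl):
    # Fail closed: any certificate on the revocation list is rejected.
    return serial not in crl

assert not certificate_is_acceptable("1A2B3C", revoked_serials)  # revoked: denied
assert certificate_is_acceptable("7F8A9B", revoked_serials)      # not revoked: allowed
```

This is why an attacker who can keep a serial number off the list, or block the list from being fetched, defeats the whole control.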

What is lack of trust and non-repudiation in a PKI?

A public key infrastructure (PKI) with inadequate security, especially around key management, exposes the organization to loss or disruption if it cannot legally verify that a message was sent by a specific user.

A PKI built with security and integrity at its core can provide you with legal protection in instances when user activity is in dispute. A secure digital signature provides irrefutable evidence of the message’s sender as well as the time it was sent, but it is only as defensible as the PKI is strong. By demonstrating that signing keys are adequately protected all the way back to the root key, your organization can withstand a legal challenge about the authenticity of a specific user and their actions.

What is GDPR (General Data Protection Regulation)?

Perhaps the most comprehensive data privacy standard to date, the GDPR presents a significant challenge for organizations that process the personal data of EU citizens – regardless of where the organization is headquartered.

Effective as of May 2018, the EU’s General Data Protection Regulation (GDPR) is designed to improve personal data protections and increase organizational accountability for data breaches. With potential fines of up to four percent of global revenues or 20 million EUR (whichever is higher), the GDPR certainly has teeth. No matter where your organization is located, if it processes or controls the personal data of EU residents, it must be in compliance with GDPR, or it will be liable to significant fines and the requirement to inform affected parties of data breaches.

GDPR is expansive and includes the following Chapters and articles:

Chapter 1: General Provisions

  • Article 1: Subject matter and objectives
  • Article 2: Material scope
  • Article 3: Territorial scope
  • Article 4: Definitions

Chapter 2: Principles

  • Article 5: Principles relating to personal data processing
  • Article 6: Lawfulness of processing
  • Article 7: Conditions for consent
  • Article 8: Conditions applicable to child's consent in relation to information society services
  • Article 9: Processing of special categories of personal data
  • Article 10: Processing of data relating to criminal convictions and offences
  • Article 11: Processing which does not require identification

Chapter 3: Rights of the Data Subject

  • Section 1: Transparency and Modalities
  • Article 12: Transparent information, communication and modalities for the exercise of the rights of the data subject
  • Section 2: Information and Access to Data
  • Article 13: Information to be provided where personal data are collected from the data subject
  • Article 14: Information to be provided where personal data have not been obtained from the data subject
  • Article 15: Right of access by the data subject
  • Section 3: Rectification and Erasure
  • Article 16: Right to rectification
  • Article 17: Right to erasure ('right to be forgotten')
  • Article 18: Right to restriction of processing
  • Article 19: Notification obligation regarding rectification or erasure of personal data or restriction of processing
  • Article 20: Right to data portability
  • Section 4: Right to object and automated individual decision making
  • Article 21: Right to object
  • Article 22: Automated individual decision-making, including profiling
  • Section 5: Restrictions
  • Article 23: Restrictions

Chapter 4: Controller and Processor

  • Section 1: General Obligations
  • Article 24: Responsibility of the controller
  • Article 25: Data protection by design and by default
  • Article 26: Joint controllers
  • Article 27: Representatives of controllers not established in the Union
  • Article 28: Processor
  • Article 29: Processing under the authority of the controller or processor
  • Article 30: Records of processing activities
  • Article 31: Cooperation with the supervisory authority
  • Section 2: Security of personal data
  • Article 32: Security of processing
  • Article 33: Notification of a personal data breach to the supervisory authority
  • Article 34: Communication of a personal data breach to the data subject
  • Section 3: Data protection impact assessment and prior consultation
  • Article 35: Data protection impact assessment
  • Article 36: Prior Consultation
  • Section 4: Data protection officer
  • Article 37: Designation of the data protection officer
  • Article 38: Position of the data protection officer
  • Article 39: Tasks of the data protection officer
  • Section 5: Codes of conduct and certification
  • Article 40: Codes of Conduct
  • Article 41: Monitoring of approved codes of conduct
  • Article 42: Certification
  • Article 43: Certification Bodies

Chapter 5: Transfer of personal data to third countries or international organisations

  • Article 44: General Principle for transfer
  • Article 45: Transfers on the basis of an adequacy decision
  • Article 46: Transfers subject to appropriate safeguards
  • Article 47: Binding corporate rules
  • Article 48: Transfers or disclosures not authorised by Union law
  • Article 49: Derogations for specific situations
  • Article 50: International cooperation for the protection of personal data

Chapter 6: Independent Supervisory Authorities

  • Section 1: Independent status
  • Article 51: Supervisory Authority
  • Article 52: Independence
  • Article 53: General conditions for the members of the supervisory authority
  • Article 54: Rules on the establishment of the supervisory Authority
  • Section 2: Competence, Tasks, and Powers
  • Article 55: Competence
  • Article 56: Competence of the lead supervisory authority
  • Article 57: Tasks
  • Article 58: Powers
  • Article 59: Activity Reports

Chapter 7: Co-operation and Consistency

  • Section 1: Co-operation
  • Article 60: Cooperation between the lead supervisory authority and the other supervisory authorities concerned
  • Article 61: Mutual Assistance
  • Article 62: Joint operations of supervisory authorities
  • Section 2: Consistency
  • Article 63: Consistency mechanism
  • Article 64: Opinion of the Board
  • Article 65: Dispute resolution by the Board
  • Article 66: Urgency Procedure
  • Article 67: Exchange of information
  • Section 3: European Data Protection Board
  • Article 68: European Data Protection Board
  • Article 69: Independence
  • Article 70: Tasks of the Board
  • Article 71: Reports
  • Article 72: Procedure
  • Article 73: Chair
  • Article 74: Tasks of the Chair
  • Article 75: Secretariat
  • Article 76: Confidentiality

Chapter 8: Remedies, Liability, and Sanctions

  • Article 77: Right to lodge a complaint with a supervisory authority
  • Article 78: Right to an effective judicial remedy against a supervisory authority
  • Article 79: Right to an effective judicial remedy against a controller or processor
  • Article 80: Representation of data subjects
  • Article 81: Suspension of proceedings
  • Article 82: Right to compensation and liability
  • Article 83: General conditions for imposing administrative fines
  • Article 84: Penalties

Chapter 9: Provisions relating to specific data processing situations

  • Article 85: Processing and freedom of expression and information
  • Article 86: Processing and public access to official documents
  • Article 87: Processing of the national identification number
  • Article 88: Processing in the context of employment
  • Article 89: Safeguards and derogations relating to processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes
  • Article 90: Obligations of secrecy
  • Article 91: Existing data protection rules of churches and religious associations

Chapter 10: Delegated Acts and Implementing Acts

  • Article 92: Exercise of the delegation
  • Article 93: Committee procedure

Chapter 11: Final provisions

  • Article 94: Repeal of Directive 95/46/EC
  • Article 95: Relationship with Directive 2002/58/EC
  • Article 96: Relationship with previously concluded Agreements
  • Article 97: Commission Reports
  • Article 98: Review of other Union legal acts on data protection
  • Article 99: Entry into force and application

Key Provisions of Article 32

Among other measures, Article 32 of the GDPR requires:

  1. the pseudonymisation and encryption of personal data;
  2. the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
  3. the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;
  4. a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.

Key Provisions of Article 34

Article 34 of the regulation details when an organization must notify data subjects of a breach — and the conditions under which that notification is not required.

  1. When the personal data breach is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall communicate the personal data breach to the data subject without undue delay.
  2. The communication to the data subject referred to in paragraph 1 of this Article shall describe in clear and plain language the nature of the personal data breach ….
  3. The communication to the data subject referred to in paragraph 1 shall not be required if any of the following conditions are met:
    1. the controller has implemented appropriate technical and organisational protection measures, and those measures were applied to the personal data affected by the personal data breach, in particular those that render the personal data unintelligible to any person who is not authorised to access it, such as encryption;
    2. the controller has taken subsequent measures which ensure that the high risk to the rights and freedoms of data subjects referred to in paragraph 1 is no longer likely to materialise;
    3. it would involve disproportionate effort. In such a case, there shall instead be a public communication or similar measure whereby the data subjects are informed in an equally effective manner.

Related Articles

What Is Pseudonymisation?

Pseudonymisation is generally associated with the EU’s General Data Protection Regulation (GDPR), which calls for pseudonymisation to protect personally identifiable information (PII). According to “Article 4, Definitions” of the Agreed Upon Text of the GDPR:

'Pseudonymisation' means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.

Earlier the same document defines “personal data” and “data subject”:

'Personal data' means any information relating to an identified or identifiable natural person ('data subject'); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.
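
One common way to implement the separation the definition describes is a keyed hash: the identifier is replaced with an HMAC pseudonym, and the secret key is the "additional information" held separately under access controls. The sketch below uses Python's standard library; the key value and identifier are illustrative, and this is one approach among several (tokenization vaults are another), not the GDPR's prescribed method.

```python
import hashlib
import hmac

# Illustrative pseudonymisation via a keyed hash (HMAC-SHA256).
# The key must be stored separately from the pseudonymised records.
SECRET_KEY = b"stored-separately-under-access-control"  # assumed, illustrative key

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymise("jane.doe@example.com")

# Deterministic: the same identifier and key always yield the same pseudonym,
# so records can still be linked for processing without exposing the identifier.
assert token == pseudonymise("jane.doe@example.com")
assert "jane" not in token
```

Without the separately held key, the pseudonym cannot be attributed back to the natural person — which is exactly the property Article 4 demands.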

Related Articles

Why Does PCI DSS Matter?

PCI DSS stands for Payment Card Industry Data Security Standard. Protecting payment-related data is certainly important, but similar concerns about a much wider range of sensitive personal information — such as medical records, criminal backgrounds, and employment information — have elevated the issue of data protection, triggering numerous privacy laws and data-breach-disclosure obligations.

Compliance, of course, is mandatory. Failure to take the appropriate steps would at the very least damage your organization’s reputation and put the enterprise at a competitive disadvantage. Worse, if you experienced a data breach, you’d be hit by fines and accusations of negligence would come thick and fast. Those fines might be levied by the card brands themselves and/or your acquirer (the organization that processes transactions on your behalf and that might be responsible for vouching for your PCI DSS compliance to the payment card brands). You’d also face increased transaction fees and potential litigation.

Avoiding all this trouble makes it easy to see why complying with the PCI DSS is in your organization’s best interest. There’s another benefit: You can use many of the same technologies and processes you use to achieve PCI DSS compliance to protect a wide variety of data across your enterprise.

Note: This material is drawn from PCI Compliance & Data Protection for Dummies, Thales Limited Edition, by Ian Hermon and Peter Spier.

Related Articles

Why Should My Organization Maintain a Universal Data Security Standard, If It Is Subject to PCI DSS?

Account data can easily find its way into a wide variety of business systems, ranging from transaction processing to customer relationship management and added-value systems, such as loyalty and customer support. The challenge is that all these environments need to be protected to achieve compliance with the PCI DSS. As a result, this standard has a breadth and depth that far exceed those of other privacy and data security mandates. In fact, security experts tend to agree that it represents and aligns well with industry best practices. Although some aspects of the standard may be new to your organization, it likely addresses areas of genuine risk.

The standard was designed to be applied consistently by all companies around the world, from one-man bands to huge multinational corporations. In practice, however, assessments also have to take legal, regulatory, and business requirements into account.

Note: This material is drawn from PCI Compliance & Data Protection for Dummies, Thales Limited Edition, by Ian Hermon and Peter Spier.

Related Articles

What Are the Core Requirements of PCI DSS?

The PCI DSS consists of 12 published requirements, which in turn contain multiple sub-requirements. The 12 PCI DSS compliance requirements are organized in six groups as shown in the table below:

PCI DSS Compliance Requirements

Build and Maintain a Secure Network

  • Requirement 1: Install and maintain a firewall configuration to protect cardholder data.
  • Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters.

Protect Cardholder Data

  • Requirement 3: Protect stored cardholder data.
  • Requirement 4: Encrypt transmission of cardholder data across open, public networks.

Maintain a Vulnerability Management Program

  • Requirement 5: Protect all systems against malware and regularly update antivirus software or programs.
  • Requirement 6: Develop and maintain secure systems and applications.

Implement Strong Access Control Measures

  • Requirement 7: Restrict access to cardholder data by business need to know.
  • Requirement 8: Identify and authenticate access to system components.
  • Requirement 9: Restrict physical access to cardholder data.

Regularly Monitor and Test Networks

  • Requirement 10: Track and monitor all access to network resources and cardholder data.
  • Requirement 11: Regularly test security systems and processes.

Maintain an Information Security Policy

  • Requirement 12: Maintain a policy that addresses information security for all personnel.

Note: This material is drawn from PCI Compliance & Data Protection for Dummies, Thales Limited Edition, by Ian Hermon and Peter Spier.

Related Articles

Can I Use PCI DSS Principles to Protect Other Data?

To become PCI DSS compliant, you’re going to be investing a lot of time and money in building a secure infrastructure and supporting processes to meet PCI DSS security requirements. The PCI DSS is primarily concerned with the protection of cardholder data. What about all the other data that your company handles that has nothing to do with payments? Some of it may benefit from similar levels of protection.

By thinking beyond what you’re doing to meet PCI DSS requirements, you can leverage those security principles to build additional solutions that support your organization’s critical assets. You could do any of the following:

  • Encrypt all the network traffic inside your organization to ensure that only those who need to see the data can do so.
  • Protect all data at rest across your whole enterprise by using encryption and/or tokenization and ensuring that only those who are authorized to decrypt that data have access to it.
  • Protect all sensitive data at the point of capture (the point at which it enters your organization) by encrypting selected fields in the data record.
  • Keep security under your full control by encrypting data and managing the keys locally before sending data to any cloud service provider you use.
  • Implement a layered security approach so that your infrastructure doesn’t have a single vulnerable point of attack, which makes it much more difficult for an attacker (inside or outside your organization) to gain unauthorized access to your data.

If you adopt a security-conscious approach to all data and to data access within your organization, meeting the specific PCI DSS requirements is much simpler.

Note: This material is drawn from PCI Compliance & Data Protection for Dummies, Thales Limited Edition, by Ian Hermon and Peter Spier.

Related Articles

How Can I Protect Stored Payment Cardholder Data (PCI DSS Requirement 3)?

At the heart of the PCI DSS is the need to protect any cardholder data that you store. The standard provides examples of suitable card holder data protection methods, such as encryption, tokenization, truncation, masking, and hashing. By using one or more of these protection methods, you can effectively make stolen data unusable.

Protecting stored data isn’t a “one size fits all” concept. You should think of PCI DSS Requirement 3 as being the minimum level of security that you should implement to make life as difficult as possible for potential attackers.

Knowing the data storage rules

You need to know all locations where data is stored (a big incentive to minimize your data footprint). Requirement 3 also provides guidance about which data can — and can’t — be stored. One of the best pieces of advice in this requirement is “If you don’t need it, don’t store it.”

Making stored data unreadable

The PCI DSS requires you to render a primary account number (PAN) unreadable anywhere it’s stored, including portable storage media, backup devices, and even audit logs (which are often overlooked). The deliberate use of the word unreadable by the PCI Security Standards Council allows the council to avoid mandating any particular technology, which in turn futureproofs the requirements. Even so, Requirement 3.4 lists several acceptable options:

  • One-way hashes based on strong cryptography in which the entire PAN must be hashed
  • Truncation, which stores a segment of the PAN (not to exceed the first six and last four digits)
  • Tokenization, which stores a substitute or proxy for the PAN rather than the PAN itself
  • Strong cryptography underpinned by key management processes and security procedures
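
Three of the options above — hashing, truncation, and tokenization — can be sketched in a few lines. This is an illustrative sketch using an assumed example PAN, not a PCI-validated implementation; in practice hashed PANs need additional safeguards (such as a keyed hash), and a token vault must live in a separately secured system.

```python
import hashlib
import secrets

pan = "4111111111111111"   # assumed example PAN (a common test number)

# One-way hash: the entire PAN is hashed so it cannot be reversed.
hashed = hashlib.sha256(pan.encode()).hexdigest()

# Truncation: store at most the first six and last four digits; the
# middle digits are discarded entirely, not merely hidden.
truncated = pan[:6] + pan[-4:]

# Tokenization: store a random surrogate in business systems; only the
# separately secured vault can map the token back to the real PAN.
token_vault = {}
token = secrets.token_hex(8)
token_vault[token] = pan

assert truncated == "4111111111"
assert token_vault[token] == pan
```

Each option trades off differently: hashing supports matching but not recovery, truncation destroys data irreversibly, and tokenization preserves recoverability while concentrating risk in the vault.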

Managing keys securely

Whatever approach you intend to use to render your stored data unreadable, you need to secure the associated cryptographic keys. Strong encryption is useless if it’s coupled with a weak key management process. The standard provides detailed guidance on managing keys — guidance that closely mirrors the way banks and other financial institutions are required to secure their cryptographic keys. Additional requirements call on you to fully document the way you implement and manage various keys throughout their life cycles.

Your success in managing keys depends on having good cryptographic key custodians: people you trust who won’t collude to attack your systems. These people are required to formally acknowledge that they understand and accept their key-custodian responsibilities.

Also, you must ensure that security policies and operational procedures for protecting stored cardholder data are documented, used, and known to all affected parties within your organization.

Don’t underestimate the critical importance of strong key management, and don’t try to take shortcuts. Your Qualified Security Assessor will find your errors, and attackers may find them too.

Masking the PAN before displaying

The standard provides some very specific advice regarding the display of a PAN: Display the full range of digits (normally, 16) only to those personnel who must view it for business reasons. In all other cases, you must implement masking to ensure that no more than the first six digits and the last four digits of the PAN are displayed.
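
The first-six/last-four display rule translates directly into a masking routine: the middle digits are replaced before the PAN ever reaches a screen or report. A minimal sketch, using an assumed example PAN:

```python
# Illustrative display masking per the first-six/last-four rule.
def mask_pan(pan: str) -> str:
    if len(pan) < 10:
        raise ValueError("unexpected PAN length")
    # Show at most the first six and last four digits; mask the rest.
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

assert mask_pan("4111111111111111") == "411111******1111"
```

Unlike truncation, masking is a display control: the full PAN still exists in storage (protected by other means), so masking alone does not satisfy the storage requirements.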

Note: This material is drawn from PCI Compliance & Data Protection for Dummies, Thales Limited Edition, by Ian Hermon and Peter Spier. Please refer to it for more detail on these topics.

Related Articles

How Can I Encrypt Account Data in Transit (PCI DSS Requirement 4)?

Sensitive data is quite vulnerable when it’s transmitted over open networks, including the Internet, public or otherwise untrusted wireless networks, and cellular networks. The PCI Security Standards Council takes a very hard line on data in transit, requiring the use of trusted keys/certificates, secure transport protocols, and strong encryption. The council also assigns you the ongoing task of reviewing your security protocols to ensure that they conform to industry best practices for secure communications.

Blocking eavesdroppers

Many potential attackers are eavesdroppers who are trying to exploit known security weaknesses. The PCI DSS includes specific requirements and guidance on establishing connections to other systems:

  • Proceed only when you have trusted keys/certificates in place. You’re expected to validate these keys and/or certificates and to make sure that they haven’t expired.
  • Configure your systems to use only secure protocols, and don’t accept connection requests from systems using weaker protocols or inadequate encryption key lengths.
  • Implement strong PCI DSS encryption for authentication and transmission over wireless networks that transmit cardholder data or that are connected to the cardholder data environment.
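
On the client side, the "secure protocols only" rule above typically means configuring TLS with certificate validation on and older protocol versions refused. A minimal sketch using Python's standard `ssl` module (the minimum version chosen here is an assumption; your policy may require newer):

```python
import ssl

# Sketch of a client-side TLS policy: validate the server certificate
# chain and hostname, and refuse connections below TLS 1.2.
context = ssl.create_default_context()            # enables cert + hostname checks
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject older, weaker protocols

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

A connection opened through this context (for example, via `context.wrap_socket`) will fail rather than silently fall back to a weak protocol or an untrusted certificate — the fail-closed behavior the standard expects.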

Securing end-user messaging

Much of the PCI DSS focuses on protecting PANs. Requirement 4 sets forth some specific rules about transmitting PANs across open networks. As a result, technologies that your organization normally uses (such as end-user messaging technologies) may need to be adapted, replaced, or discontinued when cardholder data is being transmitted. The main constraints of Requirement 4 are as follows:

  • PANs must never be sent unprotected over commercial technologies such as email, instant-messaging, and chat applications.
  • Before using any of these end-user technologies, you must ensure that PANs have been rendered unreadable via strong cryptography.
  • If a third party requests a PAN, that third party must provide a tool or method to protect the PAN, or you must render the number unreadable before transmission.

When you encrypt cardholder data as part of your network communications process, you must define the appropriate security policies and operational procedures. In addition, you must make sure that the relevant documents are kept up to date, made available to, and followed by all relevant people in your organization.

Note: This material is drawn from PCI Compliance & Data Protection for Dummies, Thales Limited Edition, by Ian Hermon and Peter Spier.

Related Articles

How Can I Restrict Access to Cardholder Data (PCI DSS Requirement 7)?

A considerable portion of the PCI DSS concerns access control mechanisms, which must be sufficiently robust and comprehensive to deliver the protection required for cardholder data.

Requirement 7 of the PCI DSS clearly states that you must restrict data access. You have to ensure that critical data can be accessed only by authorized personnel and that you have the appropriate systems and processes in place to limit access based on business needs and job responsibilities. The requirement also calls for you to immediately remove access when access is no longer needed.

Try to keep the number of people who need access to data to the absolute minimum, with access needs identified and documented according to defined roles and responsibilities.

Managing your access policy

The standard requires you to think very carefully about who in your organization has access to system components and the effect of that access on the security of your cardholder data environment. This task becomes much more complex if you have multiple office locations or data centers, or if you use cloud-based service providers to host some of your data.

You’re required to manage your access control policy at quite a granular level, carefully defining the various user roles in your organization (user, administrator, and so on) and specifying which parts of your system and data they can access.

In practice, you need to implement sufficient controls to create a practical, effective access control policy, so spend sufficient planning time to devise the best mechanism to satisfy your needs.

Assigning “least privilege” access rights

The standard is prescriptive in that it forces you to grant “least privilege” access rights to all user accounts with requests for access documented and approved. The logic is that you grant each person only enough access to the various bits of the system or data he or she needs to perform his or her job functions. An administrator, for example, could define an access policy for another user to view the cardholder data, but she herself wouldn’t be able to read the data directly.

Depending on your environment, you may need to address multiple system types and varying levels of access for network, host, and application-level use and administration. This task can prove to be complex when, for example, you need to give multiple types of users different access rights to your databases.

It’s best to disable access to data by default and then enable any access that’s required. This method makes it easier to prevent access-granting mistakes that could lead (in the worst-case scenario) to a data breach.

Revoking data access

When a user has a change of role internally, document the change, and modify that user’s privileges as appropriate. Similarly, when a user leaves your company, you need to document the change and then disable or delete his or her user account in alignment with your organization’s policy and procedure.

An established, consistent process can help ensure strong privilege management. In addition, Thales recommends that you periodically run queries on user accounts to verify account activity. You might run a scheduled script on a quarterly basis, for example.

Note: This material is drawn from PCI Compliance & Data Protection for Dummies, Thales Limited Edition, by Ian Hermon and Peter Spier.

Related Articles

How Can I Authenticate Access to System Components (PCI DSS Requirement 8)?

Strong security is essential for protecting your systems and data from unauthorized access. Requirement 8 of the PCI DSS contains many elements that you need to address in your access control and password policies for staff members and third parties alike.

Ensuring individual accountability

It’s important to ensure that every user (internal or external) who needs access to your systems has a unique identifier so that no dispute occurs later about who performed a particular task. (For details on handling nonrepudiation, for example, see PCI DSS Requirement 8.1.) Strict enforcement of unique identifiers for each user inherently prevents the use of group-based or shared identities (see PCI DSS Requirements 8.1.5 and 8.5).

You also need to ensure full accountability whenever new users are added, existing credentials are modified, or the accounts of users who no longer need access are deleted or disabled. This accountability includes revoking access immediately for a terminated user, such as an employee who has just left your company (see PCI DSS Requirements 7.1.4 and 8.1.2).

Making access management flexible

Having a compliant user access policy is all well and good, but that policy takes you only part of the way to compliance with the PCI DSS. You’re required to underpin your user access policy with an access management system that spells out various tasks, such as the following:

  • Restricting data access by third parties (such as vendors that require remote access to service or support your systems). Grant access only when those parties need it, and monitor their use of your system. Never offer unrestricted 24/7 access.
  • Locking out users who make multiple unsuccessful login attempts over a specified period (to make automated password attacks more difficult).
  • Making the system unavailable to any user after a specified period of inactivity and requiring a repeat login to continue (to minimize the risk of impersonation).
  • Enforcing multifactor authentication methods (normally, tokens or smart cards) for people who attempt non-console administrative or remote access to cardholder-data-environment system components. This enhanced security approach raises the bar for attackers.
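The lockout behavior described in the second bullet can be sketched as follows; the six-attempt threshold shown here is illustrative (the standard specifies the exact limits and lockout duration to enforce):

```python
# Sketch of account lockout after repeated failed login attempts, to make
# automated password-guessing attacks more difficult.
from collections import defaultdict

class LockoutTracker:
    def __init__(self, max_attempts=6):
        self.max_attempts = max_attempts
        self._failures = defaultdict(int)
        self._locked = set()

    def record_failure(self, user):
        self._failures[user] += 1
        if self._failures[user] >= self.max_attempts:
            self._locked.add(user)

    def record_success(self, user):
        # A successful login resets the counter, unless already locked.
        if user not in self._locked:
            self._failures[user] = 0

    def is_locked(self, user):
        return user in self._locked

tracker = LockoutTracker()
for _ in range(6):
    tracker.record_failure("mallory")
print(tracker.is_locked("mallory"))  # True
```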

Beefing up authentication

For all types of access, the standard expects a strong authentication system. The standard also provides details on implementing and managing this authentication system. In the case of passwords, for example, PCI DSS Requirement 8.2 directs you to do the following:

  • Use strong cryptography to render all authentication credentials (such as passwords or passphrases) unreadable during transmission and storage on all system components, thereby devaluing data where it’s most vulnerable to an insider attack.
  • Set strict conditions for passwords. At a minimum, all passwords must be changed every 90 days and contain at least seven alphanumeric characters, and the reuse of previous passwords must be prohibited.
  • Supply an initial password to each new user, and require her to change that password the first time she accesses your system.
  • Prohibit group or shared passwords.
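The password conditions above can be sketched as a simple validation routine; note that a real system would compare salted password hashes, never a plaintext history:

```python
# Sketch of the PCI DSS 8.2 password checks: minimum of seven characters,
# both letters and digits, and no reuse of previous passwords.
import re

def password_acceptable(password, previous_passwords):
    if len(password) < 7:
        return False
    if not (re.search(r"[A-Za-z]", password) and re.search(r"\d", password)):
        return False
    if password in previous_passwords:
        return False
    return True

print(password_acceptable("s3curePwd", ["oldPass1"]))  # True
print(password_acceptable("short1", []))               # False
```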

After you establish an authentication policy, provide it to all users to help them understand and follow the requirements.

Note: This material is drawn from PCI Compliance & Data Protection for Dummies, Thales Limited Edition, by Ian Hermon and Peter Spier.

Related Articles

How Can I Monitor Access to Cardholder Data (PCI DSS Requirement 10)?

If you don’t have precise details on how and when your data is being accessed, updated, or deleted, you’ll struggle to identify attacks on your systems. Also, you’ll have insufficient information to investigate if something goes wrong, especially after a data breach.

Fortunately, PCI DSS Requirement 10 calls for keeping, monitoring, and retaining comprehensive audit logs.

Maintaining audit trails

The standard mandates that certain activities — especially reading, writing, or modifying data (see PCI DSS Requirement 10.2) — be recorded in automated audit trails for all system components. These components include external-facing technologies and security systems, such as firewalls, intrusion-detection and intrusion-prevention systems, and authentication servers.

In addition, the standard describes how to record specific details so that you know the who, what, where, when, and how of all data accesses. Any root or administrator user access, for example, should be logged, especially when a privileged user escalates his privileges before attempting data access.

PCI DSS Requirement 10.4 also calls for all cardholder data environment system components to be configured to receive accurate time-synchronization data. If you don’t already have this capability, you may need to upgrade your systems.

One important piece of information to log is any failed access attempt — a good indicator of a brute-force attack or sustained guessing of passwords, especially if the access log has lots of entries. You must also record additions and deletions, such as increased access rights, lower authentication constraints, temporary disabling of logs, and software substitution (which could be a sign of malware).
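An audit record capturing the who, what, where, when, and how might be sketched as follows; the field names are illustrative, not the exact attributes mandated by the standard:

```python
# Sketch of an audit-trail entry covering the details discussed above,
# including the outcome so that failed access attempts are recorded too.
import json
from datetime import datetime, timezone

def audit_event(user, event_type, resource, origin, success):
    return json.dumps({
        "user": user,          # who performed the action
        "event": event_type,   # what was attempted
        "resource": resource,  # where: the component or data affected
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "origin": origin,      # how: the source of the access
        "success": success,    # outcome, including failures
    })

record = audit_event("admin1", "read", "cardholder_db", "10.0.0.5", False)
print(record)
```

Records like this would be shipped to a centralized, access-restricted log store rather than kept on the originating host.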

Preventing unauthorized modification of logs

After you create your audit logs, you must ensure that the logs are secured in such a way that they can’t be altered. You must use a centralized PCI DSS logging solution (see PCI DSS Requirement 10.5.3) with restricted access and sufficient capacity to retain at least 90 days’ worth of log data from all system components within the cardholder data environment, with the remainder of a full year available for restoration if needed.

Making regular security reviews

As well as ensuring that required details are generated, centrally stored, and secured against unauthorized access or modification, you must monitor your logs and security events on at least a daily basis, with alerts requiring review at any time of day or night (see PCI DSS Requirements 10.6 and 12.10.3). This requirement helps you identify anomalies and suspicious activity.

Thales recommends you consider implementing a centralized logging solution that accounts for future capacity and includes reporting tools.

Note: This material is drawn from PCI Compliance & Data Protection for Dummies, Thales Limited Edition, by Ian Hermon and Peter Spier.

Related Articles

How Can I Make Stored PAN Information Unreadable?

Following are some of the most popular methods for rendering stored information — especially primary account numbers (PANs) — unreadable.

Masking

Masking relates to maintaining the confidentiality of data when it’s presented to a person. The process is familiar to anyone who has used a payment card in a restaurant or shop and then checked the printed receipt; certain digits of the PAN are shown as Xs rather than the actual digits (see figure below). Per PCI DSS Requirement 3.3, PAN display should be limited to the minimum number of digits necessary to perform job functions and should not exceed the first six and last four digits.

Masking a PAN for display purposes.

masking

Source: Thales

Truncation

Truncation renders stored data unreadable by ensuring that only a subset of the complete PAN is stored. As in masking, no more than the first six and last four digits can be stored.

Truncating a PAN

truncating

Source: Thales
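The two techniques can be sketched as follows for a 16-digit PAN; the sample number is fabricated for illustration:

```python
# Sketch of masking vs. truncation: masking hides digits at display time,
# while truncation stores only the first six and last four digits.

def mask_pan(pan):
    # Show at most the first six and last four digits; replace the rest.
    return pan[:6] + "X" * (len(pan) - 10) + pan[-4:]

def truncate_pan(pan):
    # Store no more than the first six and last four digits.
    return pan[:6] + pan[-4:]

pan = "4000001234567899"
print(mask_pan(pan))      # 400000XXXXXX7899
print(truncate_pan(pan))  # 4000007899
```

Note that masking leaves the full PAN in storage (only the display changes), whereas truncation discards the middle digits permanently.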

One-Way Hashing

A hash function is a one-way cryptographic process that converts an arbitrary block of data (in this case, a PAN) to a fixed-length string of data, designed so that different inputs are overwhelmingly unlikely to produce the same result. The process is irreversible (which is why it’s called one-way); it’s commonly used to ensure that data hasn’t been modified, because any change in the original block of data results in a different hash value.

The figure below illustrates the use of the hash function in the context of the PCI DSS. The technique provides confidentiality (it’s computationally infeasible to re‑create a PAN from a hashed version of that PAN), but like truncation, it makes using the stored data for subsequent transactions impossible.

One-way hash of a PAN

hash

Source: Thales

You can’t retain truncated and hashed versions of the same payment card within your cardholder data environment unless you implement additional controls to ensure that the two versions can’t be correlated to reconstruct the PAN.
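A minimal sketch of one-way hashing of a PAN, assuming SHA-256 with a deployment-specific salt (because PANs have limited entropy, an unsalted hash could be brute-forced, so additional protection such as a salt or keyed hash is needed in practice):

```python
# Sketch of one-way hashing of a PAN: the digest has a fixed length, the
# PAN cannot be recovered from it, and any change to the input produces a
# completely different digest.
import hashlib

def hash_pan(pan, salt):
    return hashlib.sha256(salt + pan.encode()).hexdigest()

salt = b"per-deployment-secret"  # illustrative; manage as a real secret
digest = hash_pan("4000001234567899", salt)
print(len(digest))  # 64 hex characters, regardless of input length
print(hash_pan("4000001234567890", salt) == digest)  # False
```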

Tokenization

Tokenization is a process that replaces the original PAN with surrogate data — a token that may look like a legitimate PAN but has no value to an attacker. In most implementations, the process is reversible; tokens can be converted back to the original PANs on request. Tokenization is used when stored PANs need to be accessible for subsequent transactions.

You can create tokens in a variety of ways. Following are two common approaches:

  • Tokens calculated directly from the original PAN value: This method yields the same token for each given PAN in a process that’s said to be deterministic.
  • Tokens generated randomly: This method yields a different token every time, unless a lookup of previously tokenized PANs is performed so that a previously issued token can be reused.

The degree to which the tokenization process is deterministic can be important in certain scenarios. Everything depends on how the tokens are being used. In some cases, it’s desirable to preserve not just the format of the PAN during the tokenization process, but also certain digits of the PAN (see the figure below).

Tokenization of a PAN (last four digits preserved)

tokenize

Source: Thales
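A random-token vault along the lines of the second approach can be sketched as follows; a production tokenization service would add access controls, secure vault storage, and format rules beyond this core idea:

```python
# Sketch of a random (non-deterministic) token vault: each PAN receives a
# random surrogate of the same length, and a lookup table allows reversal
# (de-tokenization) for subsequent transactions.
import secrets

class TokenVault:
    def __init__(self):
        self._pan_to_token = {}
        self._token_to_pan = {}

    def tokenize(self, pan):
        # Reuse a previously issued token for the same PAN, if any.
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        token = "".join(secrets.choice("0123456789") for _ in pan)
        self._pan_to_token[pan] = token
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token):
        return self._token_to_pan[token]

vault = TokenVault()
t = vault.tokenize("4000001234567899")
print(vault.detokenize(t) == "4000001234567899")  # True
```

The lookup table is what makes a random scheme reversible; a deterministic scheme instead derives the same token from the same PAN every time.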

Encryption

In some ways, the goals of encryption are similar to those of tokenization, in that PAN data is replaced by data that has no intrinsic value to an attacker. Encryption uses standardized cryptographic algorithms and keys to derive the encrypted PAN from the original data. The algorithms are widely known, so the security of the process hinges on the strength and handling of the cryptographic keys, which is why hardware security modules are widely involved.

The encryption process generally changes the format of the data; typically, data size increases when the data is encrypted. To minimize changes in existing systems that come into contact with the data (the same reason tokenization attempts to preserve the format of the original PAN), organizations often employ format‑preserving encryption (FPE; see the figure below).

Encryption of a PAN (with FPE and preservation of last four digits).

encryption

Source: Thales

Related Articles

How Do I Extend my Existing Security and Data Controls to the Cloud?

Beyond managing risk through contracts (Section 2.1 Data Governance, CSA Security Guidance for Critical Areas of Focus in Cloud Computing v4.0), you can exercise control over your data stored within cloud resources. Several cloud services are intended to overlap with or replicate your on-premises systems, allowing greater consistency in management and data governance.

Identity Management is central to this approach, with Domain 12 of the guidance outlining strategies for replicating or sharing identities, as well as access control options like Single Sign-On and Federated Identity across cloud providers. Supplementary controls over data access are provided by bring your own key (BYOK); you can import your own keys into software-based key management systems, or into dedicated Hardware Security Modules provided by the cloud vendor.

Note: This material is drawn from Thales White Paper: “Best Practices for Secure Cloud Migration. Leveraging Cloud Security Alliance Security Guidelines.”

Related Articles

How Do I Protect Data as I Move and Store it in the Cloud?

There are three basic strategies to accomplish this:

  1. Encrypt data prior to transport
  2. Use encryption with both transport and storage services
  3. Use data-centric security

Subsection 5.1.2 of CSA Security Guidance for Critical Areas of Focus in Cloud Computing v4.0 shows how each of these strategies works when moving (and using) data in the cloud. The idea is that you should define your data governance strategy and understand the trade-offs of these methods prior to implementation.

Section 11 of the guidance discusses the specific technologies to support each of these strategies. If you choose to encrypt prior to moving data to the cloud, or have an enterprise-wide encryption solution in place, you’ll either want to mirror on-premises keys and encryption capabilities for data access in the cloud, blend on-premises and cloud-native services, or bring your existing encryption to the cloud in place of cloud-native services.

If you choose to encrypt at the services layer, for transport (e.g., TLS, VPN) and data storage (e.g., volume, object, database), you can leverage cloud native capabilities or your preferred encryption solution to secure each service that data comes into contact with. Data-centric security tools like masking and tokenization can transform data prior to cloud migration.

While some static masking solutions are non-reversible, if you need to reverse tokens into original data values, you will need to do so on premises or bring your existing tokenization service to the cloud to handle de-tokenization requests. Any of these three approaches will provide secure transport and storage of data and can be used to replicate information to multiple cloud service models.

Note: This material is drawn from Thales White Paper: “Best Practices for Secure Cloud Migration. Leveraging Cloud Security Alliance Security Guidelines.”

Related Articles

How Do I Ensure the Cloud Provider Does Not Access my Data?

Most cloud providers are just as fearful of rogue administrators accessing your data as you are, as this type of ‘Black Swan’ event could severely affect their reputations and valuations. As such they go to great lengths to ensure their administrators cannot access customer data, encryption keys and systems without prior approval and full audit controls. But it remains a risk, however small.

More probable is the risk that the cloud vendor will be compelled to provide access under a court order, as described in Domain 3: Legal Issues, Contracts and Electronic Discovery of CSA Security Guidance for Critical Areas of Focus in Cloud Computing v4.0. Your Risk Management (Domain 2) and Information Governance (Domain 5) plans will need to account for these risks.

For extreme cases where you must minimize or exclude all access to your data by the cloud provider or hostile external parties, combinations of cloud services, bring your own encryption, and data management controls such as tokenization with data masking as a form of data redaction, can provide full segregation and protection.

Most Infrastructure as a Service providers now offer, at an added expense for compute nodes, “Trusted Execution Environments.” Code and data are passed fully encrypted to these servers and decrypted only below the hypervisor layer, as they are loaded into secure hardware, so no other process can examine or alter the data or code.

Couple trusted execution with the ability to either bring your own encryption, bring your own keys (e.g., BYOK for SaaS, PaaS, IaaS as described in Domain 11) and key management software (e.g., Bring Your Own Encryption for PaaS/IaaS as described in Domain 10 and 11), and you have full control over data storage and data in use.

Note: This material is drawn from Thales White Paper: “Best Practices for Secure Cloud Migration. Leveraging Cloud Security Alliance Security Guidelines.”

Related Articles

Can I Use my own Encryption Keys in the Cloud?

Yes, you can.

Many major SaaS, PaaS and IaaS vendors offer the ability to import keys from your on-premises HSM into a key vault or cloud HSM, as fully described in Domain 11 of CSA Security Guidance for Critical Areas of Focus in Cloud Computing v4.0. The level of integration varies depending on the cloud vendor and on whether you opt for on-premises or cloud HSMs. You may need to perform the import manually, but you are provided with up to FIPS 140-2 Level 3 security. From there, the cloud provider derives keys from the master key you imported to encrypt data contained in various services (e.g., object, volume, database).

Note: This material is drawn from Thales White Paper: “Best Practices for Secure Cloud Migration. Leveraging Cloud Security Alliance Security Guidelines.”

Related Articles

How Do I Enforce Data Residency Policies in the Cloud and, Specifically, Comply with GDPR?

CSA Security Guidance for Critical Areas of Focus in Cloud Computing v4.0 dedicates a significant portion of Domain 3 (Legal Issues, Contracts and Electronic Discovery) to outline your responsibilities for EU security concerns in general, and GDPR compliance specifically. This provides a good roadmap of what data you need to account for and what controls to implement.

Thales recommends starting with the basic controls you use for any regulated Personally Identifiable Information (PII), because the controls and types of data are similar to those GDPR addresses. This is briefly discussed in Domain 11. We also recommend using Identity Management, encryption, and key management as multiple mechanisms to enforce the Cross-Border Data Transfer Restrictions, so that, in the event data is moved, it can be rendered inaccessible. To fulfill your accountability requirements, you will need to collect both cloud access-control logs and the logs from your own applications and services.

The guidance has extensive comments on what logs to collect and how to create secure logging architectures and monitor behavior from logs in Domain 7 (Infrastructure Security), Domain 9 (Incident Response), and Domain 10 (Application Security).

Note: This material is drawn from Thales White Paper: “Best Practices for Secure Cloud Migration. Leveraging Cloud Security Alliance Security Guidelines.”

Related Articles

How Do I Track and Monitor Data Access and Usage in the Cloud?

Monitoring is discussed in almost every domain of the CSA Security Guidance for Critical Areas of Focus in Cloud Computing v4.0, but very few concrete examples of how to accomplish monitoring are provided. Also unstated is that logging capabilities are somewhat new for most public cloud vendors, and monitoring these logs for security related events or compliance reports is decidedly nascent. Cloud vendors are getting better at it, but the log files seldom represent a full picture of activity.

To be realistic, if you want to monitor in the cloud, you will need a blend of cloud and third-party tools. The primary need is to collect a combination of the service logs and the identity logs provided by the cloud, in addition to log files from the servers, containers and applications you run. This means you will need to leverage all sources and possibly even use a data warehouse or logging tool to supplement event storage.

The good news is that some clouds now provide the ability to filter and route the events they generate, and to create basic security policies that, in effect, monitor cloud events and raise alerts when certain conditions appear in the logs. These are still basic monitoring capabilities, so you will likely need either to move a portion of the log data back on premises for monitoring, alerting, and reporting, or to build that infrastructure in the cloud.

It is common to see application logs, syslog and web gateway events all streamed to a Hadoop cluster, Elastic Stack, Splunk or even SIEM installations running in the cloud. These installations then leverage the same reporting and analytics capabilities used on premises and provide consistent reporting.
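The consolidation-and-filtering idea can be sketched as follows; the event shapes and type names are hypothetical:

```python
# Sketch of normalizing events from several sources (cloud identity logs,
# application logs, syslog) into one stream and filtering for
# security-relevant types before alerting or reporting.

SECURITY_TYPES = {"auth_failure", "privilege_change", "log_disabled"}

def filter_security_events(events):
    return [e for e in events if e["type"] in SECURITY_TYPES]

events = [
    {"source": "cloud_iam", "type": "auth_failure", "user": "bob"},
    {"source": "app", "type": "page_view", "user": "alice"},
    {"source": "syslog", "type": "privilege_change", "user": "eve"},
]
print(filter_security_events(events))
```

In a real pipeline this filtering step would sit in front of the SIEM or analytics cluster, reducing the volume of events that need long-term storage.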

Note: This material is drawn from Thales White Paper: “Best Practices for Secure Cloud Migration. Leveraging Cloud Security Alliance Security Guidelines.”

Related Articles

Can I Secure Containers in the Cloud or across Different Clouds?

Container security is covered briefly in Domain 8 (Virtualization and Containers) of CSA Security Guidance for Critical Areas of Focus in Cloud Computing v4.0. Specifically, Section 8.1.4 touches on four areas:

  1. Infrastructure security
  2. Management plane
  3. Image repository
  4. Container content security

Infrastructure security is critical: a poorly secured OS can give an attacker access to all data and secrets on a server, or even control of the server itself.

Container management is typically performed by what are called “Orchestration Managers,” the most common of which are Kubernetes and Swarm. Neither is cloud-native, and both are, unfortunately, insecure by default. Bootstrapping new containers requires issuing the credentials and secrets they need to access data and operate. Image repositories, both from major vendors and cloud-native systems, do provide secure image stores as well as digital-signature capabilities to ensure that container images have not been tampered with.

While the guidance gives a few road signs directing you to areas that need attention, it lacks specific tools and instructions. To close these gaps, the guidance recommends leveraging secrets-management technologies to issue credentials to containers at runtime, and transparent disk or file encryption so that sensitive data is accessible only to the containers you deem appropriate.

The guidance also recommends leveraging code/container signature systems provided by the container repository, and enforcing that the container orchestration system can only use approved containers in the registry. And, if you specify your own OS to run containers atop, just as Domain 8 advises for virtual servers, you need to spend considerable time making sure the OS is a secure variant configured for container use. Cloud Identity and Access controls will gate who can access or administer both the containers and the surrounding container infrastructure and security tools.

The cloud vendor will offer logs for access which you can bundle with orchestration logs to examine activity.

Note: This material is drawn from Thales White Paper: “Best Practices for Secure Cloud Migration. Leveraging Cloud Security Alliance Security Guidelines.”

Related Articles

How Do I Secure my Data in a Multi-Tenant Cloud Environment?

Security in a multi-tenant environment begins with asking questions of your potential cloud service providers (CSPs). A consistent tool you can use to compare multiple vendors of a multi-tenant solution is the Consensus Assessment Initiative Questionnaire (CAIQ) from the Cloud Security Alliance. You can provide the questionnaire to each vendor and compare their answers, apples-to-apples. The CAIQ is divided into various “Security Control Domains,” which can educate you, the user, as well as enable you to get objective information from the multi-tenant providers. It’s up to you to decide how much of the questionnaire your selected vendor must comply with.

If you can’t find sufficient security in a multi-tenant environment, some vendors provide single-tenant versions of their multi-tenant offering:

  • Microsoft is leading this charge with Azure Stack, a single-tenant version of Microsoft Azure.
  • AWS is rumored to offer a single-tenant version of AWS. You might have to be a very big customer to hear about it.
  • And, of course, Thales offers CipherTrust Cloud Key Manager as a multi-tenant cloud service, but we also offer it as a single-tenant version.

So, if you can’t get a single-tenant solution, in the best case you can gain assurances from your multi-tenant provider that all data is encrypted, and you can hold the keys.

Related Articles

What is the Shared Security Model?

The shared responsibility model is a well-accepted tool to help raise awareness that while cloud providers are responsible for the security of the cloud, cloud buyers are responsible for the security of their data in the cloud.1

You’re almost certainly responsible for the security of data on your premises and in the cloud. As your workloads migrate to multiple cloud providers, questions you likely will want to address to ensure you are confident in the security of your data in the cloud include:

  • Are you in compliance with internal and industry data protection mandates?
  • Is your data protected in the event of a subpoena issued to your cloud provider?
  • Can you securely move data quickly from one cloud provider to the next?

Image removed.

Related Articles

1 See the various Shared Responsibility Models: Amazon Web Services; Microsoft Azure

What is the Cloud Security Alliance?

According to the Cloud Security Alliance (CSA):

[It] is the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment. CSA harnesses the subject matter expertise of industry practitioners, associations, governments, and its corporate and individual members to offer cloud security-specific research, education, certification, events and products. CSA’s activities, knowledge and extensive network benefit the entire community impacted by cloud — from providers and customers, to governments, entrepreneurs and the assurance industry — and provide a forum through which diverse parties can work together to create and maintain a trusted cloud ecosystem.1

Thales is a member of CSA.

Related Articles

1https://cloudsecurityalliance.org/about/

What is the Cloud Controls Matrix?

While organizations have realized the benefits of cloud computing, many are still defining their long-term cloud security strategies and adapting to changing business requirements. The Cloud Security Alliance's (CSA) "Cloud Controls Matrix" can help you define your requirements when developing or refining your enterprise cloud security strategy.

According to the CSA:

The Cloud Security Alliance Cloud Controls Matrix (CCM) is specifically designed to provide fundamental security principles to guide cloud vendors and to assist prospective cloud customers in assessing the overall security risk of a cloud provider. The CSA CCM provides a controls framework that gives detailed understanding of security concepts and principles that are aligned to the Cloud Security Alliance guidance in 13 domains. The foundations of the Cloud Security Alliance Controls Matrix rest on its customized relationship to other industry-accepted security standards, regulations, and controls frameworks such as the ISO 27001/27002, ISACA COBIT, PCI, NIST, Jericho Forum and NERC CIP and will augment or provide internal control direction for service organization control reports attestations provided by cloud providers.1

Related Articles

1 https://cloudsecurityalliance.org/group/cloud-controls-matrix/#_overview

What is the Consensus Assessment Initiative Questionnaire?

Security in a multi-tenant environment begins with asking questions of your potential cloud service providers (CSPs). A consistent tool you can use to compare multiple vendors of a multi-tenant solution is the Consensus Assessment Initiative Questionnaire (CAIQ) from the Cloud Security Alliance. You can provide the questionnaire to each vendor and compare their answers, apples-to-apples. The CAIQ is divided into various “Security Control Domains,” which can educate you, the user, as well as enable you to get objective information from the multi-tenant providers. It’s up to you to decide how much of the questionnaire your selected vendor must comply with.

Related Articles

What is SalesForce Shield Platform Encryption?

Salesforce Shield Platform Encryption enables enterprises using Salesforce to natively encrypt data at rest across their Salesforce apps without compromising business functionality. Thales Vormetric offerings help organizations safely store, manage, and maintain the Salesforce tenant secrets used to derive the encryption keys that protect data within the Salesforce environment, and to meet compliance and best practice requirements for management of these encryption keys. Bring Your Own Key (BYOK) places control of Salesforce encrypted data firmly in the hands of customers by controlling the final form of Salesforce encryption keys.

Compliance mandates, data residency requirements, government regulations and best practices require that enterprises using Salesforce Shield Platform Encryption protect and maintain encryption keys in accordance with specific frameworks and laws. To meet the requirements of these frameworks and laws, enterprises must also meet specific maintenance and storage requirements for tenant secrets as the controlling element for Salesforce encryption keys.

Related Articles

What is Multi-Cloud Key Management?

IaaS/PaaS- and SaaS-provider encryption enables enterprises using cloud providers to secure data at rest with encryption across their cloud workloads without compromise to business functionality. Thales CipherTrust Cloud Key Manager adds controls that enable organizations to help meet compliance and best-practice requirements by generating, storing, managing and maintaining data encryption keys within a secure environment.

In order to meet compliance mandates, data residency requirements and best practices, enterprises using cloud provider encryption may need to address some additional requirements for managing keys:

  • Encryption key material storage separated from key usage locations
  • Customer management of key creation, rotation, deactivation and destruction
  • Separation of duties for key management based upon organization and locale
  • Auditing of encryption key management, usage and access

CipherTrust Cloud Key Manager enables organizations to easily meet these requirements, making use of cloud-vendor encryption while simplifying encryption key management tasks.

Related Articles

What Are the Key Requirements of IoT Security?

The key requirements for any IoT security solution are:

  • Device and data security, including authentication of devices and confidentiality and integrity of data
  • Implementing and running security operations at IoT scale
  • Meeting compliance requirements and requests
  • Meeting performance requirements as per the use case

Key Functional Blocks

IoT security solutions need to implement the functional blocks listed below as interconnected modules, not in isolation, to meet the IoT scale, data security, device trust and compliance requirements.

  • Device Trust: Establishing and managing Device Identity and Integrity
  • Data Trust: Policy-driven, end-to-end data security and privacy, from creation to consumption
  • Operationalizing the Trust: Automating and interfacing with standards-based, proven technologies/products (e.g., PKI products)

Note: This material was drawn from “Healthcare IoT Security Blueprint: Requirements, Components and Guidelines.”

Related Articles

What Do Connected Devices Require to Participate in the IoT Securely?

To securely participate in the IoT, each connected device needs a unique identification – even before it has an IP address. This digital credential establishes the root of trust for the device’s entire lifecycle, from initial design to deployment to retirement.

Thales uses hardware security modules (HSMs), combined with supporting security applications from Thales technology partners, to enable manufacturers to provide each device a unique ID using the strongest cryptographic processing, key protection, and key management available. A digital certificate is injected into each device to enable:

  • Authentication of each device introduced to the organization’s architecture
  • Verification of the integrity of the operating system and applications on the device
  • Secure communications between devices, gateway, and cloud
  • Authorized software and firmware updates, based on approved code

Are There Security Guidelines for the IoT?

A number of organizations have developed security guidelines for the IoT.

Why Is Device Authentication Necessary for the IoT?

Strong IoT device authentication is required to ensure connected devices on the IoT can be trusted to be what they purport to be. Consequently, each IoT device needs a unique identity that can be authenticated when the device attempts to connect to a gateway or central server. With this unique ID in place, IT system administrators can track each device throughout its lifecycle, communicate securely with it, and prevent it from executing harmful processes. If a device exhibits unexpected behavior, administrators can simply revoke its privileges.
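
The challenge-response flow described above can be sketched in a few lines. The sketch below uses a per-device symmetric key and HMAC purely to stay self-contained; in a real deployment the device would hold a private key and present an X.509 certificate, and the registry, device ID, and key shown here are hypothetical.

```python
import hashlib
import hmac
import secrets

# Hypothetical registry mapping device IDs to per-device secret keys
# provisioned at manufacture. A production system would store device
# certificates instead; HMAC stands in for an asymmetric signature here.
DEVICE_KEYS = {"sensor-001": b"per-device-secret-provisioned-at-manufacture"}

def issue_challenge() -> bytes:
    """Server side: generate a fresh random nonce for the device to sign."""
    return secrets.token_bytes(16)

def device_response(device_id: str, challenge: bytes) -> bytes:
    """Device side: prove possession of the provisioned key."""
    return hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).digest()

def verify_device(device_id: str, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the expected response and compare in constant time."""
    if device_id not in DEVICE_KEYS:
        return False
    expected = hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
assert verify_device("sensor-001", challenge, device_response("sensor-001", challenge))
assert not verify_device("sensor-001", challenge, b"\x00" * 32)  # forged response fails
```

Because each challenge is a fresh nonce, a captured response cannot be replayed later, which is what lets administrators trust that the device on the other end holds the provisioned credential.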

Why Is Secure Manufacturing Necessary for IoT Devices?

IoT devices produced through unsecured manufacturing processes provide criminals opportunities to change production runs to introduce unauthorized code or produce additional units that are subsequently sold on the black market.

One way to secure manufacturing processes is to use hardware security modules (HSMs) and supporting security software to inject cryptographic keys and digital certificates and to control the number of units built and the code incorporated into each.

Why Is Code Signing Necessary for IoT Devices?

To protect businesses, brands, partners, and users from software that has been infected by malware, software developers have adopted code signing. In the IoT, code signing in the software release process ensures the integrity of IoT device software and firmware updates, and defends against the risks associated with code tampering or code that deviates from organizational policies.

In public key cryptography, code signing is a specific use of certificate-based digital signatures that enables an organization to verify the identity of the software publisher and certify the software has not been changed since it was published.
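
The integrity half of that verification can be sketched with a digest comparison. This is a deliberate simplification: a real code-signing scheme signs the digest with the publisher's private key and ships the signature plus a certificate chain, which the Python standard library alone cannot demonstrate, so only the "has the code changed since publication?" check is modeled here.

```python
import hashlib

def publish(firmware: bytes) -> str:
    """Publisher side: compute the digest that the private key would sign."""
    return hashlib.sha256(firmware).hexdigest()

def verify(firmware: bytes, published_digest: str) -> bool:
    """Device side: reject any image whose digest does not match the published one."""
    return hashlib.sha256(firmware).hexdigest() == published_digest

digest = publish(b"firmware v1.2 image")
assert verify(b"firmware v1.2 image", digest)               # untouched image passes
assert not verify(b"firmware v1.2 image TAMPERED", digest)  # modified image fails
```

The asymmetric signature over the digest is what adds the second guarantee named above: that the digest itself came from the identified publisher and not from the attacker who tampered with the code.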

For more on code signing, see “What is Code Signing?”

What is IoT PKI?

Today there are more things (devices) online than there are people on the planet! Devices are the number one users of the Internet and need digital identities for secure operation. As enterprises seek to transform their business models to stay competitive, rapid adoption of IoT technologies is creating increasing demand for Public Key Infrastructures (PKIs) to provide digital certificates for the growing number of devices and the software and firmware they run.

Safe IoT deployments require not only trusting the devices to be authentic and to be who they say they are, but also trusting that the data they collect is real and not altered. If one cannot trust the IoT devices and the data, there is no point in collecting, running analytics, and executing decisions based on the information collected.

Secure adoption of IoT requires:

  • Enabling mutual authentication between connected devices and applications
  • Maintaining the integrity and confidentiality of the data collected by devices
  • Ensuring the legitimacy and integrity of the software downloaded to devices
  • Preserving the privacy of sensitive data in light of stricter security regulations
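
The first of these requirements, mutual authentication, typically maps to mutual TLS backed by the PKI-issued certificates. The sketch below shows the relevant settings using Python's standard `ssl` module; the commented-out file names are hypothetical placeholders, left uncommented-in so the configuration runs without certificate files on disk.

```python
import ssl

# Server (application/gateway) side: demand a certificate from each device.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED   # reject devices without a client cert
# server_ctx.load_cert_chain("server.crt", "server.key")      # hypothetical paths
# server_ctx.load_verify_locations("device_ca.crt")           # trust the device CA

# Device (client) side: verify the server's identity as well.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.check_hostname = True
client_ctx.verify_mode = ssl.CERT_REQUIRED
# client_ctx.load_cert_chain("device.crt", "device.key")      # device's own identity

assert server_ctx.verify_mode == ssl.CERT_REQUIRED
```

With both contexts loaded against a common PKI, each side of the connection proves its identity before any data flows, which also covers the integrity and confidentiality requirements for data in transit.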

What Is the 2019 Thales Data Threat Report?

The 2019 Thales Data Threat Report (2019 DTR) presents the results of a global IDC Web-based survey of 1,200 executives with responsibility for or influence over IT and data security. It:

  • Takes the pulse of the security industry, particularly with respect to data threats and breaches
  • Identifies key data security trends and issues

Respondents are from Australia, Germany, India, Japan, the Netherlands, New Zealand, the UK, and the U.S., and represent a range of industries, with a primary emphasis on healthcare, financial services, retail, and federal government organizations. Job titles range from C-level executives, including CEO, CFO, Chief Data Officer, CISO, Chief Data Scientist, and Chief Risk Officer, to SVP/VP, IT Administrator, Security Analyst, Security Engineer, and Systems Administrator. Respondents represent a broad range of organizational sizes, with the majority ranging from 500 to 10,000 employees. The survey was conducted in November 2018.

Key Findings

Key findings in the 2019 DTR include:

  • Digital transformation is pervasive – and is putting sensitive data at risk
  • Multi-cloud security is the top digital transformation problem for data
  • Tools that reduce multi-cloud data security complexity are critical
  • Encryption technologies are the top tools needed
  • No one is safe from data breaches
  • For many, preventing data breaches is not an IT security spending priority
  • Higher investments in IT security don’t correlate to lower rates of data breaches
  • Regulatory and compliance changes introduce new challenges

The report also examines data security as it relates to:

  • The Cloud
  • Mobile payments
  • Internet of Things
  • Big Data
  • Containers/Docker
  • Blockchain

Later in 2019, the DTR will become available in multiple geographic and industry editions, as the 2018 DTR currently is.

What is the 2018 Thales Data Threat Report?

The Thales Data Threat Report presents the results of a comprehensive global survey of data security professionals. It takes the pulse of the security industry, particularly with respect to data threats and breaches. Thales conducts this survey in conjunction with 451 Research. The 2018 report is based on web and phone interviews conducted in 2017 with 1,200 senior executives in Germany, Japan, India, the Netherlands, Sweden, South Korea, the UK, and the U.S. Most have a major influence on or are the sole decision maker for IT at their respective companies. Respondents represented a number of industries, including automotive, energy, government, financial services, healthcare, IT, manufacturing, retail, and telecommunications.

In addition to reviewing trends in data threats and compliance, the report looks at:

  • Data breaches experienced
  • Perceived danger of specific kinds of threats (e.g. APT vs. insider threat, etc.)
  • Perceived levels of enterprise vulnerability to data threats
  • Data security spending
  • Motivations for data security spending
  • Barriers to data security adoption
  • Effectiveness ratings of IT security tools

The 2018 Report looks at some topical issues as well, such as digital security for:

  • The cloud and multi-cloud deployments
  • Big data
  • IoT
  • Docker/Containers
  • Artificial intelligence and machine learning
  • Mobile payments
  • Blockchain

The report is available in multiple geographic and industry editions, covering major markets and major verticals.

What is the 2019 Thales Data Threat Report, Federal Edition?

The 2019 Thales Data Threat Report (DTR), Federal Edition is based on a global IDC web-based survey of 1,200 executives with responsibility for or influence over IT and data security from nine countries, and a range of industries. Job titles range from C-level executives to SVP/VP, IT Administrator, Security Analyst, Security Engineer, and Systems Administrator. Respondents represent a broad range of organizational sizes, with the majority ranging from 500 to 10,000 employees. The survey was conducted in November 2018.

The Federal Edition focuses on the findings from the 100 U.S. Federal Government respondents, providing comparisons and contrast to other U.S. vertical markets. For global roll-up findings and analysis, please see cpl.thalesgroup.com/dtr.

Key Findings

Key findings in the 2019 DTR Federal Edition include:

  • Federal Government agencies are playing digital transformation catch-up
  • Federal agencies may be approaching a security spend ceiling
  • Threat vectors for the Federal Government are broadening
  • Respondents believe they have adequate security (which may reflect a false sense of security)
  • Increasing complexity in Federal data environments is a top barrier to data security
  • Federal Government agencies are broadly adopting clouds for their sensitive data, but respondents have a number of cloud security concerns
  • Agencies are taking a multi-layered approach to security
  • Federal Government’s aspirational desires for data security may outstrip budget realities
  • Regulatory and compliance changes have introduced new challenges
  • Federal Government encryption rates are low

The report also examines Federal Government data security as it relates to:

  • The cloud
    • Software as a service
    • Infrastructure as a service
    • Platform as a service
  • Mobile payments
  • Internet of things
  • Big data
  • Containers/Docker
  • Blockchain

What is PSD2?

The European Union’s Revised Payment Services Directive (PSD2) was designed to open up the financial services market in Europe in a safe and secure way by amending the ground rules for financial services providers. The directive requires all EU member states to incorporate these new rules into their national laws and regulations.

How does PSD2 work?

Under PSD2, banks and other account-holding institutions in the EU are required to provide APIs for licensed external services providers (so-called Third-Party Providers, or TPPs). After obtaining their license, these TPPs can use the APIs to offer a range of payment and information services, from consumer apps that provide a one-stop overview of all your different bank accounts to software that helps e-commerce websites facilitate direct payments.

Who can become a TPP?

The directive distinguishes between two types of TPPs: Account Information Service Providers (AISPs), which provide account information services, and Payment Initiation Service Providers (PISPs), which initiate payments. Different licences are issued to reflect the nature of the activity. Businesses can also obtain a TPP licence so that payment and information services can be taken in-house. Potential TPPs under PSD2 include:

  • Fintech companies
  • Big tech companies
  • Merchants
  • Banks
  • Insurance companies

Why was PSD2 created?

PSD2 was created to promote a more integrated and competitive financial services market in the EU while protecting and strengthening consumer rights. Traditionally, financial and payment services were mostly offered by banks and related institutions, leading to a relatively closed market. This directive has opened the market, allowing easier access for existing businesses as well as fintech companies who can provide agile, innovative payment services for consumers and businesses alike.

How did PSD2 come about?

The directive is nicknamed PSD2 because it is a follow-up of the original Payment Services Directive of 2007. It came into effect in January 2018, and all companies were required to become compliant with the national laws and regulations pertaining to PSD2 by September 2019. The original PSD provided a legal foundation to improve the ease, efficiency and security of cross-border payments within the EU. It was instrumental to the implementation of the Single European Payments Area (SEPA), lowered the barrier to entry for payment institutions, and offered consumers increased freedom of choice in the payment solutions they wished to use.

In 2013, the European Commission proposed a review of PSD due to innovations in the payment services market, which were unaccounted for in the existing regulations. The Commission also noted that the rules from the original directive tended to be applied differently across member states. PSD2 provides updated ground rules for new players on the payment services market while also updating the definitions of the regulations set out in PSD to smooth out any differences between the member states.

What does PSD2 mean?

PSD2 has opened up interesting opportunities for businesses: Integrated payment and information services (whether in-house or provided by an external TPP) improve the customer experience and provide access to a wealth of customer information and insights.

At the same time, PSD2 has brought a number of technical challenges for banks and TPPs. In most cases, IT infrastructure needed to be changed to facilitate TPP access. PSD2 also introduced strict security and authentication requirements that needed to be implemented across all access points.

PSD2 has presented unique opportunities and challenges depending on your business situation. 

What does PSD2 mean for Third Party Providers?

The two types of TPP licences reflect the activities that can be provided: services based on account information or payment initiation services. As long as TPPs comply with the security requirements under PSD2, the forms their services might take are near limitless. As such, PSD2 encourages TPPs to come up with innovative propositions that add real value. Some examples include:

  • Merchants taking the payments process in-house for a smoother customer experience
  • Apps that offer an overview of all your accounts across different banks
  • Insurance companies offering instant insurance cover based on recent purchases
  • Apps that help you save money based on your spending patterns
  • Banks offering quicker and more secure B2B loans

The beauty of this concept is that most TPPs are not subject to the same stringent regulatory burden as traditional banks and are typically not weighed down by the legacy IT infrastructure that constrains most banks. As a result, they can be much more innovative and adaptable, allowing them to meet market demand quickly and efficiently.

Banks and TPPs: A match made in heaven

When PSD2 first came into effect, its requirement to give third parties access to transaction data seemed like a loss for some banks. Yet, this requirement has actually proven to give banks the chance to become more competitive and improve customer relationships. By collaborating with innovative TPP partners or even taking things in-house and applying for their own TPP licence, banks have been able to offer all sorts of customer-focused services to stay one step ahead of the competition.

How TPPs can implement PSD2

Of course, TPPs also need to respect the ground rules laid out by PSD2. After all, consumers are giving them access to highly personal and sensitive information. That is why PSD2 set some strict security requirements in place. The directive focuses on two main areas:

  • Strong customer authentication
  • Secure communications

To comply with these requirements, TPPs have to build a sophisticated and adaptable infrastructure. A Customer Identity and Access Management (CIAM) platform offers a convenient solution, as it helps you implement things like strong customer authentication, fine-grained access control, and user analytics, while allowing you to connect with banks and other payment services.
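
One common building block of strong customer authentication is a time-based one-time password (TOTP), standardized in RFC 6238 and typically delivered through an authenticator app as a second factor. The sketch below is a minimal stdlib implementation of the HMAC-SHA1 variant; it illustrates the mechanism only and is not a PSD2-specific requirement or a Thales API.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s, 8 digits
secret = base64.b32encode(b"12345678901234567890").decode()
assert totp(secret, for_time=59, digits=8) == "94287082"
```

Because the code is derived from a shared secret plus the current 30-second window, it proves possession of an enrolled device without transmitting the secret itself, which is exactly the "something you have" factor strong customer authentication calls for.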

What is eIDAS?

eIDAS is an EU regulation that establishes standards for electronic identities, authentication and signatures. The goal of the Regulation is to encourage the creation of a single European market for secure electronic commerce.

The eIDAS Regulation applies to government bodies and businesses that provide online services to European citizens, and that recognize or use identities, authentication, or signatures.

eIDAS requires that government and public commercial services recognize standard signature formats and pan-European identities. This applies to services associated with tax statements, insurance contracts, banking agreements, business-to-business electronic invoicing and pharmaceutical records. It also applies to commercial services that require an EU identity, for example, so-called “know your customer” services in banking. In addition, any trust services associated with these activities are regulated by eIDAS.

What is DEFCON 658?

The UK Ministry of Defence’s (MOD) DEFCON 658 aims to protect the defence supply chain from cyber threats and applies to organisations that are suppliers or wish to become suppliers to the MOD on contracts that handle MOD Identifiable Information (MODII).

DEFCON 658, which took effect in October 2017, is a procurement protocol on cybersecurity that requires all suppliers to Defence who bid for new contracts that necessitate the transfer of MODII to abide by DEFCON 658 and meet the standards mandated in DEFSTAN 05-138. Notably, adherence to DEFCON 658 extends to the supply chains (sub-contractors) of the suppliers themselves.

Because DEFCON 658 applies to all suppliers throughout the MOD supply chain where MODII is involved, organisations that do not adhere to its requirements will not be able to participate in MOD contracts.

What is the South Africa POPI Act?

South Africa’s Protection of Personal Information (POPI) Act aims to ensure that organisations operating in South Africa exercise proper care when collecting, storing or sharing personal data.

South Africa’s POPI Act, which became law on 11th April, 2014, requires organisations to adequately protect sensitive data or face large fines, civil law suits or even prison. The Act extends certain rights to data subjects that give them control over how their personal information can be collected, processed, stored and shared.

According to Chapter 11 (Offences, Penalties and Administrative Fines) of the POPI Act:

107. Any person convicted of an offence in terms of this Act, is liable, in the case of a contravention of–

(a) section 100, 103(1), 104(2), 105(1), 106(1), (3) or (4) to a fine or to imprisonment for a period not exceeding 10 years, or to both a fine and such imprisonment; or

(b) section 59, 101, 102, 103(2) or 104(1), to a fine or to imprisonment for a period not exceeding 12 months, or to both a fine and such imprisonment.

According to Chapter 11, “a Magistrate’s Court has jurisdiction to impose any penalty provided for in section 107.”

What is Australia Privacy Amendment (Notifiable Data Breaches) Act 2017 Compliance?

Australia's Privacy Act establishes a mandatory requirement to notify the Privacy Commissioner and affected individuals of data breaches.

Regulation Summary

On February 13, 2017, the Australian Senate passed a bill establishing a mandatory requirement to notify the Privacy Commissioner and affected individuals of "eligible" data breaches. The Privacy Amendment (Notifiable Data Breaches) Act 2017 amends Australia's Privacy Act 1988.

Penalties

According to Global Legal Monitor:

A failure to notify that is found to constitute a serious interference with privacy under the Privacy Act 1988 can be penalized with a fine of up to ... AU$1.8 million ... (about ... US$1.37 million ...).

Compliance Summary

Section 26WG of the Act says breach notification is not necessary if “access or disclosure ... would not be likely to result in serious harm.” The section further states:

Access to, or disclosure of, information would not be likely [to result in serious harm] if a security technology or methodology:

...

(i) was used in relation to the information; and

(ii) was designed to make the information unintelligible or meaningless to persons who are not authorised to obtain the information
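
The effect of this carve-out can be sketched as a simple decision helper. The function and parameter names below are hypothetical illustrations of the logic, not legal advice: encryption only helps if the keys were not also compromised, since otherwise the information is no longer "unintelligible" to the attacker.

```python
def notification_required(serious_harm_likely: bool,
                          data_encrypted: bool,
                          key_also_compromised: bool = False) -> bool:
    """Hypothetical sketch of the Section 26WG logic for an eligible breach."""
    if data_encrypted and not key_also_compromised:
        # Information is unintelligible to unauthorised persons, so serious
        # harm is not likely and notification is not required.
        return False
    return serious_harm_likely

assert notification_required(True, data_encrypted=False) is True
assert notification_required(True, data_encrypted=True) is False
assert notification_required(True, data_encrypted=True, key_also_compromised=True) is True
```

This is why strong encryption with sound key management functions as a practical safe harbor under the Act: it can take an incident out of the "eligible data breach" category altogether.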

What is Japan’s My Number Compliance?

Japan’s Personal Information Protection Act (PIPA) requires protection of citizens’ personal data against leakage, loss, or damage; supervision of employees handling the data; and supervision of third parties entrusted with the data.

Regulation Summary

The data security requirements for businesses handling data associated with an individual’s Japanese “My Number” are governed primarily by Japan’s “Personal Information Protection Act (PIPA).”

These include:

  • Taking necessary and proper measures for the prevention of leakage, loss, or damage, and for other security control of personal data
  • Exercising necessary and appropriate supervision over the employees handling the data to ensure the security control of the personal data
  • Exercising necessary and appropriate supervision over any persons or organizations entrusted with the data to ensure the security control of the entrusted personal data

What is Monetary Authority of Singapore Guidance Compliance?

To safeguard sensitive customer data and comply with the Monetary Authority of Singapore’s Technology Risk Management guidelines, organizations need to apply consistent, robust and granular controls.

Regulation Overview

The Monetary Authority of Singapore (MAS) published Technology Risk Management (TRM) Guidelines to help financial firms establish sound technology risk management, strengthen system security, and safeguard sensitive data and transactions.

The TRM contains statements of industry best practices that financial institutions conducting business in Singapore are expected to adopt. The MAS makes clear that, while the TRM requirements are not legally binding, they will be a benchmark the MAS uses in assessing the risk of financial institutions (FIs).

Guideline Descriptions

  • 8.4.4 The FI should encrypt backup tapes and disks, including USB disks, containing sensitive or confidential information before they are transported offsite for storage.
  • 9.1.6 Confidential information stored on IT systems, servers and databases should be encrypted and protected through strong access controls, bearing in mind the principle of “least privilege”.
  • 11.0.1.c Access control principle – The FI should only grant access rights and system privileges based on job responsibility and the necessity to have them to fulfill one's duties. The FI should check that no person by virtue of rank or position should have any intrinsic right to access confidential data, applications, system resources or facilities.
  • 11.1.1 The FI should only grant user access to IT systems and networks on a need-to-use basis and within the period when the access is required. The FI should ensure that the resource owner duly authorises and approves all requests to access IT resources.
  • 11.2 Privileged Access Management.
  • 11.2.3.d. Grant privileged access on a “need-to-have” basis.
  • 11.2.3.e. Maintain audit logging of system activities performed by privileged users.
  • 11.2.3.f. Disallow privileged users from accessing systems logs in which their activities are being captured.
  • 13 payment card security (automated teller machines, credit and debit cards).
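
Guidelines 11.1.1 and 11.2.3.d can be illustrated with a small access-decision sketch. The grant table, user names, and resource names below are hypothetical; the point is that every grant carries both an owner-approved scope and an expiry, so access is on a need-to-use basis and only within the period it is required.

```python
from datetime import datetime, timedelta

now = datetime(2024, 1, 15, 9, 0)  # illustrative clock value

# Hypothetical grant table: (user, resource) -> expiry of the approved grant.
# Each entry models a request "duly authorised and approved" by the
# resource owner, per Guideline 11.1.1.
GRANTS = {
    ("alice", "payments-db"): now + timedelta(days=30),
}

def access_allowed(user: str, resource: str, at: datetime) -> bool:
    """Deny by default; allow only an unexpired, explicitly approved grant."""
    expiry = GRANTS.get((user, resource))
    return expiry is not None and at < expiry

assert access_allowed("alice", "payments-db", now)
assert not access_allowed("alice", "hr-db", now)                             # no grant
assert not access_allowed("alice", "payments-db", now + timedelta(days=31))  # expired
```

Deny-by-default is the design choice that enforces the "no intrinsic right by rank or position" principle in 11.0.1.c: absence of an approved grant, not presence of a block rule, is what denies access.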

What is Philippines Data Privacy Act of 2012 Compliance?

The Philippines Data Privacy Act adopts international principles and standards for personal data protection related to the processing of personal data across both government and the private sector.

Regulation Technical Security Requirements

Section 28 of the rules, entitled Guidelines for Technical Security Measures, offers the following direction:

Where appropriate, personal information controllers and personal information processors shall adopt and establish the following technical security measures:

a. A security policy with respect to the processing of personal data;

b. Safeguards to protect their computer network against accidental, unlawful or unauthorized usage, any interference which will affect data integrity or hinder the functioning or availability of the system, and unauthorized access through an electronic network;

...

d. Regular monitoring for security breaches, and a process both for identifying and accessing reasonably foreseeable vulnerabilities in their computer networks, and for taking preventive, corrective, and mitigating action against security incidents that can lead to a personal data breach;

...

g. Encryption of personal data during storage and while in transit, authentication process, and other technical security measures that control and limit access.

What is South Korea’s PIPA Compliance?

One of the strictest data protection regimes in the world, South Korea’s Personal Information Protection Act (PIPA) is supported by sector-specific legislation covering IT and communications networks and the use of credit information (the Use and Protection of Credit Information Act).

Regulation Summary

Breach Notification: PIPA places many obligations on organizations in both the public and private sectors, including mandatory data breach notification to data subjects and other authorities including the Korean Communications Commission (KCC).

Data Security: PIPA imposes a duty on information managers (i.e. data controllers) to take the "technical, administrative and physical measures necessary for security safety ... to prevent personal information from loss, theft, leakage, alteration or damage."

Official Policy Statement: Organizations are required to establish an official statement of those security measures.

Internal Privacy Officer: An internal privacy officer must be appointed (regardless of the size or nature of the organization) to oversee data processing activities. The internal privacy officer will be held accountable, and may be subject to criminal investigation following a breach.

Encryption for PII

Article 24(3) of PIPA places express restrictions on the management of unique identifying information, and requires information managers to take "necessary measures, ... including encryption," in order to prevent loss, theft, leakage, alteration or damage. Similarly, Articles 25(6) and 29 require "necessary measures" to be implemented to ensure that personal information may not be lost, stolen, altered or damaged.

Strict Enforcement

South Korea also has a track record of enforcement of data protection laws. Chapter 9 of PIPA contains severe sanctions for data security breaches including substantial fines and imprisonment – up to 50 million won in fines and imprisonment of up to five years are potential consequences.

What is New York State’s Cybersecurity Requirements for Financial Services Companies Compliance?

The New York State Cybersecurity Requirements for Financial Services Companies, or 23 NYCRR Part 500, took effect March 1, 2017. Covered entities “will be required to annually prepare and submit to the superintendent a Certification of Compliance with New York State Department of Financial Services Cybersecurity Regulations.” On March 1, 2019, the two-year transitional period ended, and Covered Entities are required to be in compliance with the requirements of 23 NYCRR 500.11.

Regulation Summary

New York State’s Department of Financial Services Cybersecurity Requirements for Financial Services Companies regulation:

Is designed to promote the protection of customer information as well as the information technology systems of regulated entities. This regulation requires each company to assess its specific risk profile and design a program that addresses its risks in a robust fashion. Senior management must take this issue seriously and be responsible for the organization’s cybersecurity program and file an annual certification confirming compliance with these regulations. A regulated entity’s cybersecurity program must ensure the safety and soundness of the institution and protect its customers.

It is critical for all regulated institutions that have not yet done so to move swiftly and urgently to adopt a cybersecurity program and for all regulated entities to be subject to minimum standards with respect to their programs. The number of cyber events has been steadily increasing and estimates of potential risk to our financial services industry are stark. Adoption of the program outlined in these regulations is a priority for New York State.1

We excerpt below specific Sections of 23 NYCRR Part 500 with which Thales can help your organization comply:

Section 500.06 Audit Trail

Each covered entity shall … include audit trails designed to detect and respond to Cybersecurity Events that have a reasonable likelihood of materially harming any material part of the normal operations of the Covered Entity.

Section 500.07 Access Privileges

As part of its cybersecurity program, based on the Covered Entity’s Risk Assessment each Covered Entity shall limit user access privileges to Information Systems that provide access to Nonpublic Information and shall periodically review such access privileges.

Section 500.08 Application Security

Each Covered Entity’s cybersecurity program shall include written procedures, guidelines and standards designed to ensure the use of secure development practices for in-house developed applications utilized by the Covered Entity, and procedures for evaluating, assessing or testing the security of externally developed applications utilized by the Covered Entity within the context of the Covered Entity’s technology environment.

Section 500.11 Third Party Service Provider Security Policy

Each Covered Entity shall implement written policies and procedures designed to ensure the security of Information Systems and Nonpublic Information that are accessible to, or held by, Third Party Service Providers.

Section 500.14 Training and Monitoring

As part of its cybersecurity program, each Covered Entity shall … implement risk-based policies, procedures and controls designed to monitor the activity of Authorized Users and detect unauthorized access or use of, or tampering with, Nonpublic Information by such Authorized Users….

Section 500.15 Encryption of Nonpublic Information

As part of its cybersecurity program, based on its Risk Assessment, each Covered Entity shall implement controls, including encryption, to protect Nonpublic Information held or transmitted by the Covered Entity both in transit over external networks and at rest.

1https://www.governor.ny.gov/sites/governor.ny.gov/files/atoms/files/Cybersecurity_Requirements_Financial_Services_23NYCRR500.pdf

What is FISMA Compliance?

FISMA assigns responsibility to various agencies to ensure the security of data in the federal government. It requires annual reviews of information security programs to keep risks below specified levels.

FISMA Requirements

According to TechTarget’s SearchSecurity website:

FISMA compliance requires program officials, and the head of each agency, to conduct annual reviews of information security programs, with the intent of keeping risks at or below specified acceptable levels in a cost-effective, timely and efficient manner. The National Institute of Standards and Technology (NIST) outlines nine steps toward compliance with FISMA:

  1. Categorize the information to be protected.
  2. Select minimum baseline controls.
  3. Refine controls using a risk assessment procedure.
  4. Document the controls in the system security plan.
  5. Implement security controls in appropriate information systems.
  6. Assess the effectiveness of the security controls once they have been implemented.
  7. Determine agency-level risk to the mission or business case.
  8. Authorize the information system for processing.
  9. Monitor the security controls on a continuous basis.

What is FIPS 199 and FIPS 200 Compliance?

FIPS Publication 200 is a mandatory federal standard developed by NIST in response to FISMA. To comply with the federal standard, organizations first determine the security category of their information system in accordance with FIPS Publication 199.

FIPS 199 and FIPS 200 Summary

According to NIST Special Publication 800-53, Revision 4:

FIPS Publication 200, Minimum Security Requirements for Federal Information and Information Systems, is a mandatory federal standard developed by NIST in response to FISMA. To comply with the federal standard, organizations first determine the security category of their information system in accordance with FIPS Publication 199, Standards for Security Categorization of Federal Information and Information Systems, derive the information system impact level from the security category in accordance with FIPS 200, and then apply the appropriately tailored set of baseline security controls in NIST Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations.

Organizations have flexibility in applying the baseline security controls in accordance with the guidance provided in Special Publication 800-53. This allows organizations to tailor the relevant security control baseline so that it more closely aligns with their mission and business requirements and environments of operation.

FIPS 200 and NIST Special Publication 800-53, in combination, ensure that appropriate security requirements and security controls are applied to all federal information and information systems. An organizational assessment of risk validates the initial security control selection and determines if additional controls are needed to protect organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, other organizations, or the Nation. The resulting set of security controls establishes a level of security due diligence for the organization.

See FIPS 199 and FIPS 200 for more detail.
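The categorization step described above hinges on the FIPS 200 “high water mark” rule: a system’s overall impact level is the highest impact assigned to any of the three security objectives (confidentiality, integrity, availability) in its FIPS 199 security category. A minimal sketch of that rule (function and level names are illustrative):

```python
# Sketch of the FIPS 200 "high water mark" rule over a FIPS 199
# security category: the system impact level is the highest impact
# assigned to any of the three security objectives.

LEVELS = ["low", "moderate", "high"]  # ordered by severity

def system_impact_level(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall system impact level (high-water-mark rule)."""
    objectives = (confidentiality, integrity, availability)
    return max(objectives, key=LEVELS.index)

# A system with moderate confidentiality, low integrity and high
# availability impact is a "high" impact system overall.
print(system_impact_level("moderate", "low", "high"))  # -> high
```

The impact level then drives which NIST SP 800-53 baseline (low, moderate or high) is tailored and applied.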

Related Articles

What is FIPS 140-2 Certification?

The Federal Information Processing Standard (FIPS) Publication 140-2 (FIPS PUB 140-2), commonly referred to as FIPS 140-2, is a US government computer security standard used to validate cryptographic modules. FIPS 140-2 was created by NIST and, per FISMA, is mandatory for US and Canadian government procurements. Many global organizations are also mandated to meet this standard.

FIPS 140-2 Overview

According to FIPS Publication 140-2:

[It] provides a standard that will be used by Federal organizations when these organizations specify that cryptographic-based security systems are to be used to provide protection for sensitive or valuable data. Protection of a cryptographic module within a security system is necessary to maintain the confidentiality and integrity of the information protected by the module. This standard specifies the security requirements that will be satisfied by a cryptographic module.

… The security requirements cover areas related to the secure design and implementation of a cryptographic module. These areas include cryptographic module specification; cryptographic module ports and interfaces; roles, services, and authentication; finite state model; physical security; operational environment; cryptographic key management; electromagnetic interference/electromagnetic compatibility (EMI/EMC); self-tests; design assurance; and mitigation of other attacks.

Certification Authorities

The US NIST (National Institute of Standards and Technology) and Canadian CSE (Communications Security Establishment) jointly participate as certification authorities in the CMVP (Cryptographic Module Validation Program) to provide validation of cryptographic modules to the FIPS 140-2 standard.

Related Articles

What is NCUA Regulatory Compliance?

The National Credit Union Administration conducts audits of credit unions based on principles and standards outlined by the Federal Financial Institutions Examination Council (FFIEC). The FFIEC standards call for numerous security controls, including data access controls, encryption and key management, and security monitoring.

Regulation

Access Rights Administration
According to FFIEC:

Financial institutions should have an effective process to administer access rights. The process should include:

  • Assigning users and devices only the access required to perform their required functions,
  • Updating access rights based on personnel or system changes,
  • Reviewing periodically users' access rights at an appropriate frequency based on the risk to the application or system ...

Encryption and Key Management
FFIEC also notes:

  • Encryption
    Financial institutions should employ an encryption strength sufficient to protect information from disclosure until such time as the information's disclosure poses no material threat. …. Decisions regarding what data to encrypt and at what points to encrypt the data are typically based on the risk of disclosure …. Encryption may also be used to protect data in storage. The implementation may encrypt a file, a directory, a volume, or a disk.
  • Encryption Key Management
    Since security is primarily based on the encryption keys, effective key management is crucial. Effective key management systems are based on an agreed set of standards, procedures, and secure methods …. (Source: ISO 17799, 10.3.5.2)

Security Monitoring
In addition, FFIEC offers guidelines for security monitoring.

Financial institutions should gain assurance of the adequacy of their risk mitigation strategy and implementation by:

  • Monitoring network and host activity to identify policy violations and anomalous behavior;
  • Monitoring host and network condition to identify unauthorized configuration and other conditions which increase the risk of intrusion or other security events;
  • Analyzing the results of monitoring to accurately and quickly identify, classify, escalate, report, and guide responses to security events; and
  • Responding to intrusions and other security events and weaknesses to appropriately mitigate the risk to the institution and its customers, and to restore the institution's systems.

Related Articles

What is Sarbanes-Oxley (SOX) Act Data-at-Rest Security Compliance?

Sections 302 and 404 of the Sarbanes-Oxley (SOX) Act set standards related to data protection, applying to US public companies and accounting firms.

Regulation

Sarbanes-Oxley Act: Section 404
Sarbanes-Oxley Act section 404 has two major compliance requirements:

  • Management is accountable for establishing and maintaining internal controls and procedures that enable accurate financial reporting and assessing this posture every fiscal year in an internal control report.
  • Public accounting firms that prepare or issue yearly audits must attest to, and report on, this yearly assessment by management.

Sarbanes-Oxley Act: Section 302
Sarbanes-Oxley Act section 302 expands this with compliance requirements to:

  • List all deficiencies in internal controls and information, as well as report any fraud involving internal employees.
  • Detail significant changes in internal controls, or factors that could have a negative impact on internal controls.

Implications
The SOX compliance requirement implications for public companies to protect data are:

  • Any financial information needs to be safeguarded and its integrity assured.
  • Specific internal security controls need to be identified that protect this data, auditing must take place, and this security posture re-assessed every year – including any changes or deficiencies as a result of changing conditions.

Related Articles

What is NAIC Insurance Data Security Model Law Compliance?

Adopted in the fourth quarter of 2017, the National Association of Insurance Commissioners (NAIC) Data Security Model Law (Model Law) requires insurers and other entities licensed by state insurance departments to develop, implement, and maintain an information security program; investigate any cybersecurity events; and notify the state insurance commissioner of such events.

States are working to introduce and pass this legislation now, and it is our understanding that the US Treasury Department will mandate the Model Law if the states do not adopt it within five years.

Regulation Summary

According to Section 2 of the act:

The purpose and intent of this Act is to establish standards for data security and standards for the investigation of and notification to the Commissioner of a Cybersecurity Event applicable to Licensees, as defined in Section 3.

Section 3 defines “Licensee” as follows:

“Licensee” means any Person licensed, authorized to operate, or registered, or required to be licensed, authorized, or registered pursuant to the insurance laws of this State ….

Section 3 also notes:

“Cybersecurity Event” means an event resulting in unauthorized access to, disruption or misuse of, an Information System or information stored on such Information System.

The term “Cybersecurity Event” does not include the unauthorized acquisition of Encrypted Nonpublic Information if the encryption, process or key is not also acquired, released or used without authorization.

We excerpt below specific Sections of The Model Law with which Thales can help your organization comply:

Section 4. Information Security Program

D. Risk Management

Based on its Risk Assessment, the Licensee shall:

(2) Determine which security measures listed below are appropriate and implement such security measures.

(a) Place access controls on Information Systems, including controls to authenticate and permit access only to Authorized Individuals to protect against the unauthorized acquisition of Nonpublic Information;

(d) Protect by encryption or other appropriate means, all Nonpublic Information while being transmitted over an external network and all Nonpublic Information stored on a laptop computer or other portable computing or storage device or media;

(e) Adopt secure development practices for in-house developed applications utilized by the Licensee …;

(g) Utilize effective controls, which may include Multi-Factor Authentication procedures for any individual accessing Nonpublic Information;

(i) Include audit trails within the Information Security Program designed to detect and respond to Cybersecurity Events …;

(k) Develop, implement, and maintain procedures for the secure disposal of Nonpublic Information in any format

Section 5. Investigation of Cybersecurity Event

If the Licensee learns that a Cybersecurity Event has or may have occurred, the Licensee, or an outside vendor and/or service provider designated to act on behalf of the Licensee, shall conduct a prompt investigation.

Related Articles

What is FedRAMP?

The Federal Risk and Authorization Management Program, or FedRAMP, is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.

FedRAMP Goals

According to FedRAMP.gov, the goals of the program are:

  • Accelerate the adoption of secure cloud solutions through reuse of assessments and authorizations
  • Increase confidence in security of cloud solutions
  • Achieve consistent security authorizations using a baseline set of agreed upon standards to be used for cloud product approval in or outside of FedRAMP
  • Ensure consistent application of existing security practice
  • Increase confidence in security assessments
  • Increase automation and near real-time data for continuous monitoring

Key Processes

Also according to FedRAMP.gov, FedRAMP authorizes cloud systems in a three-step process:

  • Security Assessment: The security assessment process uses a standardized set of requirements in accordance with FISMA using a baseline set of NIST 800-53 controls to grant security authorizations.
  • Leveraging and Authorization: Federal agencies view security authorization packages in the FedRAMP repository and leverage the security authorization packages to grant a security authorization at their own agency.
  • Ongoing Assessment & Authorization: Once an authorization is granted, ongoing assessment and authorization activities must be completed to maintain the security authorization.

Related Articles

What is GLBA Compliance?

Also known as the Financial Services Modernization Act, the Gramm-Leach-Bliley Act (GLBA) applies to U.S. financial institutions and governs the secure handling of non-public personal information, including financial records and other personal information.

Requirements

Section 501(b) of the Gramm-Leach-Bliley Act requires financial institutions to protect the security, confidentiality and integrity of non-public customer information through “administrative, technical and physical safeguards”. The Gramm-Leach-Bliley Act also requires each financial institution to implement a comprehensive written information security program that includes administrative, technical and physical safeguards appropriate to the size, complexity and scope of activities of the institution. These include:

  • Ensuring the security and confidentiality of customer records and information
  • Protecting against any anticipated threats or hazards to the security or integrity of such records
  • Protecting against unauthorized access to or use of such records or information, which could result in substantial harm or inconvenience to any customer

Implications

For organizations affected by the standard, these Gramm-Leach-Bliley privacy regulations, combined with referenced requirements under the Federal Deposit Insurance Act – section 36, result in the need to:

  • Safeguard and monitor customer records and information
  • Create and maintain effective risk assessments
  • Identify, implement and audit specific internal security controls that protect this data

Related Articles

What is HIPAA HITECH?

The US Health Insurance Portability and Accountability Act (HIPAA)

The HIPAA Security Rule requires covered entities to implement technical safeguards to protect all electronic protected healthcare information (ePHI), making specific reference to encryption, access controls, encryption key management, risk management, auditing and monitoring of ePHI information. The HIPAA Security Rule enumerates examples of encryption methods that covered entities can employ, along with the factors to consider when implementing a HIPAA encryption strategy.

Health Information Technology for Economic and Clinical Health (HITECH) Act

Enacted as a part of the American Recovery and Reinvestment Act (ARRA) of 2009, the HITECH Act expands the HIPAA encryption compliance requirement set, requiring the disclosure of data breaches of “unprotected” (unencrypted) personal health records, including those by business associates, vendors and related entities.

HIPAA Omnibus Rule of 2013

The “HIPAA Omnibus Rule” of 2013 formally holds business associates liable for compliance with the HIPAA Security Rule.

Related Articles

What is FDA/DEA EPCS Compliance?

EPCS revises DEA’s regulations to provide practitioners with the option of writing prescriptions for controlled substances electronically as well as receiving, dispensing and archiving electronic prescriptions. The electronic prescription application must incorporate a secure process for practitioner authentication.

The DEA's EPCS Regulation

"Electronic Prescriptions for Controlled Substances" revises DEA's regulations to provide practitioners with the option of writing prescriptions for controlled substances electronically. The regulations will also permit pharmacies to receive, dispense, and archive electronic prescriptions.

The DEA’s requirements for EPCS include:

(16) The digital signature functionality must meet the following requirements:

(i) The cryptographic module used to digitally sign the data elements required by part 1306 of this chapter must be at least FIPS 140–2 Security Level 1 validated. FIPS 140–2 is incorporated by reference in Section 1311.08.

....

(iii) The electronic prescription application's private key must be stored encrypted on a FIPS 140–2 Security Level 1 or higher validated cryptographic module using a FIPS-approved encryption algorithm. FIPS 140–2 is incorporated by reference in Section 1311.08.

In addition, in “§1311.205 Pharmacy application requirements” in the same DEA publication, the section states:

(b) The pharmacy application must meet the following requirements:

(4) For pharmacy applications that digitally sign prescription records upon receipt, the digital signature functionality must meet the following requirements:

(i) The cryptographic module used to digitally sign the data elements required by part 1306 of this chapter must be at least FIPS 140–2 Security Level 1 validated. FIPS 140–2 is incorporated by reference in Section 1311.08.

....

(iii) The pharmacy application's private key must be stored encrypted on a FIPS 140–2 Security Level 1 or higher validated cryptographic module using a FIPS-approved encryption algorithm. FIPS 140–2 is incorporated by reference in Section 1311.08.

Related Articles

What is NIST 800-53, Revision 4?

According to NIST Special Publication 800-53, Revision 4:

[It] provides a catalog of security and privacy controls for federal information systems and organizations and a process for selecting controls to protect organizational operations … , organizational assets, individuals, other organizations, and the Nation from a diverse set of threats ….

The controls are customizable and implemented as part of an organization-wide process that manages information security and privacy risk. The controls address a diverse set of security and privacy requirements across the federal government and critical infrastructure, derived from legislation, Executive Orders, policies, directives, regulations, standards, and/or mission/business needs.

The [NIST 800-53, Revision 4] publication also describes how to develop specialized sets of controls, or overlays, tailored for specific types of missions/business functions, technologies, or environments of operation.

Finally, the catalog of security controls addresses security from both a functionality perspective (the strength of security functions and mechanisms provided) and an assurance perspective (the measures of confidence in the implemented security capability). Addressing both security functionality and security assurance ensures that information technology products and the information systems built from those products using sound systems and security engineering principles are sufficiently trustworthy.

Related Articles

What is GDPR?

Perhaps the most comprehensive data privacy standard to date, GDPR affects any organization that processes the personal data of EU citizens -- regardless of where the organization is headquartered.

GDPR Overview

The GDPR is designed to improve personal data protections and increase organizational accountability for data breaches. Fines for non-compliance can reach four percent of global revenues or 20 million EUR (whichever is higher). No matter where your organization is located, if it processes or controls the personal data of EU residents, you need to be aware and prepared.

Specific Requirements

The GDPR includes numerous requirements for compliance. To see them all, refer to the actual regulation.

Following are key provisions of the GDPR with which Thales can help you comply:

  • Implement technical and organizational measures to ensure data security appropriate to the level of risk, including “pseudonymisation and encryption of personal data." (Article 32)
  • Have in place "a process for regularly testing, assessing and evaluating the effectiveness of technical and organizational measures for ensuring the security of the processing." (Article 32)
  • Communicate “without undue delay” personal data breaches to the subjects of such breaches "when the breach is likely to result in a high risk to the rights and freedoms" of these individuals. (Article 34)
  • Safeguard against the "unauthorized disclosure of, or access to, personal data." (Article 32)
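The “pseudonymisation” named in Article 32 can be illustrated with a toy keyed-hash sketch: a direct identifier is replaced by a value that still allows records to be linked, while re-identification requires a separately held secret. The key, identifiers and function name below are illustrative only; a real deployment would use a vetted pseudonymisation scheme with proper key management.

```python
import hashlib
import hmac

# Toy pseudonymisation sketch: replace a direct identifier with a
# keyed hash (HMAC-SHA256). Records remain linkable via the alias,
# but the alias reveals nothing without the secret key, which must
# be stored separately from the pseudonymized data.

SECRET_KEY = b"hold-this-key-separately"  # hypothetical key material

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

alias = pseudonymize("alice@example.com")
# Deterministic: the same identifier always maps to the same alias.
assert alias == pseudonymize("alice@example.com")
assert alias != pseudonymize("bob@example.com")
```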

Related Articles

What is PCI-DSS?

According to the PCI Security Standards Council, PCI DSS is a “framework for a robust payment card data security process.”

Any organization that plays a role in processing credit and debit card payments must comply with the strict PCI DSS compliance requirements for the processing, storage and transmission of account data.

Over 200 Tests against Six Core Principles

The PCI DSS standard involves assessment against over 200 tests that fall into 12 general security areas representing six core principles. These PCI DSS tests span a wide variety of common security practices along with technologies such as encryption, key management, and other data protection techniques.

Risks Associated with PCI DSS Auditing and Compliance

  • Failure to comply with PCI DSS compliance requirements can result in fines, increased fees, or even the termination of your ability to process payment card transactions.
  • Complying with the PCI DSS cannot be considered in isolation; organizations are subject to multiple security mandates and data breach disclosure laws or regulations. On the other hand, PCI compliance projects can easily be side-tracked by broader enterprise security initiatives.
  • Guidance and recommendations linked to PCI DSS requirements include common practices that are likely to be already in place. However, some aspects, specifically those associated with encryption, might be new to the organization and implementations can be disruptive, negatively impacting operational efficiency if not designed correctly.
  • It is all too easy to end up with a fragmented approach to security based on multiple proprietary vendor solutions and inadequate technologies that are expensive and complex to operate.
  • Opportunities exist to reduce the scope of PCI DSS compliance obligations and therefore reduce cost and impact; however, organizations can waste time and money if they do not exercise care to ensure that new systems and processes will in fact be accepted as PCI DSS compliant.

Related Articles

What are “Common Criteria”?

The Common Criteria for Information Technology Security Evaluation (abbreviated as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification. Common Criteria provides assurance that IT security products have been specified and evaluated in a rigorous and repeatable manner, at a level commensurate with their target environment for use. Originally developed to unify and supersede national IT security certification schemes from several countries, including the US, Canada, Germany, the UK, France, Australia and New Zealand, Common Criteria is now the most widely adopted mutual recognition of secure IT products.

Security Standard

Common Criteria certified solutions are required by governments and enterprises around the world to protect their mission-critical infrastructures. Common Criteria is often a pre-requisite for qualified digital signatures under the European Union digital signature laws. In addition, U.S. Government agencies frequently request products that are National Information Assurance Partnership (NIAP) listed, which requires Common Criteria certification.

The Common Criteria standard provides assurance on different aspects of product security, covering areas such as:

  • Development of the product, including its functional specification, high-level design, security architecture and/or implementation design
  • Guidance for the product, including manuals for its secure deployment and preparation
  • Life-cycle of the product, including all processes applicable during its creation (such as configuration management and secure development processes and tools), through its deployment, delivery and retirement
  • Supporting security policy documentation
  • Testing of the product, particularly coverage of the functional security requirements
  • Vulnerability assessments

Certification Authorities

Common Criteria is an international standard (ISO/IEC 15408). The Common Criteria Development Board manages the technical work program for the maintenance and ongoing development of the CC set of documentation.

Two major recognition agreements exist in the Common Criteria:

  1. The Common Criteria Recognition Arrangement (CCRA), which comprises 28 countries across all continents and recognizes Common Criteria certification of secure IT products up to EAL 2 among the CCRA authorizing members
  2. The Senior Officials Group – Information Systems Security (SOG-IS), which comprises 15 European countries and recognizes Common Criteria certification of secure IT products up to EAL 7, depending on the recognition level of the individual SOG-IS members

Related Articles

What are Data Breach Notification Requirements?

Data breach notification requirements following loss of personal information have been enacted by governments around the globe. They vary by jurisdiction, but almost universally include a “safe harbor” clause: if the stolen data is undecipherable and meaningless to whoever steals it, the breached organization does not need to report the breach. Consequently, data-centric protection, such as encryption, is considered best practice, because it renders data meaningless without the keys to decrypt or detokenize it.

Data Breach Disclosure Laws Widespread

National data breach disclosure laws include the UK Data Protection Act, the EU General Data Protection Regulation (GDPR), South Korea’s Personal Information Protection Act, the Australian Privacy Act, and others.

Prevention of Data Breaches a Complex Task

Data breach protection and prevention is not as simple as implementing hardware level disk encryption or OS level encryption within systems. Attacks are increasingly able to penetrate perimeter defenses, compromise accounts, and mine data without targets even being aware of the attack. With this kind of activity, simple encryption schemes won’t prevent a data breach – attackers will access accounts that allow them to decrypt and extract personal data. Driving this are criminal groups willing and able to pay for stolen personal information that has direct monetary value.

Data-Centric Focus

A data-centric security strategy for complying with data breach disclosure laws requires:

  1. Encryption of personal data wherever it resides – including file systems, databases, web repositories, cloud environments, big data environments and virtualization implementations.
  2. Policy-based access controls to assure that only authorized accounts and processes can see the data.
  3. Monitoring of authorized accounts accessing data, to ensure that these accounts have not been compromised.
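The strategy above pairs encryption or tokenization of the data with strict control over the secret that reverses it. A toy tokenization sketch (all names illustrative, not production code): a breached data set containing only tokens is meaningless without the separately protected vault.

```python
import secrets

# Toy tokenization vault: sensitive values are replaced by random
# tokens, and the token-to-value mapping is the secret that must be
# stored and protected separately from the tokenized data set.

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original value (protect this!)

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(8)  # random; carries no information
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
# Stolen tokens alone are indecipherable; recovery requires the vault.
assert vault.detokenize(token) == "4111-1111-1111-1111"
```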

Related Articles

What is Data Residency?

There are more than 100 national data privacy laws on the books. Global enterprises, SaaS vendors and cloud-solution providers need to be aware of how to meet data residency requirements in their environments.

Though there is wide variation between requirements, meeting this single rule helps ensure that your organization remains in compliance:

  • Customer and employee data must not be accessible to anyone outside the data’s home legal jurisdiction
  • Exception: when explicit consent is given on a per-usage basis

The solution to this challenge is to encrypt all data-at-rest and only allow access to data-at-rest from the jurisdiction where it originates.
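The single rule above amounts to a policy check at data-access time: compare the requester’s jurisdiction with the record’s home jurisdiction, and allow cross-border access only with explicit consent. A minimal sketch (function and jurisdiction codes are illustrative):

```python
# Illustrative residency policy check: access to a record is allowed
# only from the record's home legal jurisdiction, unless the data
# subject has given explicit per-usage consent.

def access_allowed(record_jurisdiction: str,
                   requester_jurisdiction: str,
                   explicit_consent: bool = False) -> bool:
    if requester_jurisdiction == record_jurisdiction:
        return True
    return explicit_consent  # cross-border access only with consent

print(access_allowed("DE", "DE"))        # -> True
print(access_allowed("DE", "US"))        # -> False
print(access_allowed("DE", "US", True))  # -> True
```

In practice this check would sit in front of the decryption step, so that data-at-rest stays encrypted for any requester the policy rejects.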

Related Articles

What is ISO 27799:2016?

ISO 27799 is an international standard providing guidance on how best to protect the confidentiality, integrity and availability of personal health data for anyone working in the health sector or its unique operating environments.

Regulation Summary

Among the best practices called for in ISO 27799 are:

  • Data access controls, including management of privileged access
  • Cryptographic control of sensitive data
  • Management and protection of encryption keys
  • Recording and archiving “all significant events concerning the use and management of user identities and secret authentication information” and protecting those records from “tampering and unauthorized access.”1

Related Articles

1ISO/IEC 27002, Second edition 2013-10-01: Information technology — Security techniques — Code of practice for information security controls. https://www.iso.org/standard/54533.html

What is PCI HSM?

The PCI HSM specification defines a set of logical and physical security compliance standards for HSMs specifically for the payments industry. Compliance certification depends on meeting those standards.

Certification Objectives

HSMs play a critical role in securing payment transactions, so it is essential that the HSMs themselves are kept secure throughout their lifecycle—from manufacturing and shipment to operation and decommissioning. The PCI HSM compliance certification standard provides HSM vendors with a strict set of security requirements and a rigorous process for having platforms assessed against these requirements.

Scope

PCI HSM compliance certification is increasingly becoming a fundamental requirement for various payment processes, including PIN processing, card verification, card production, ATM interchange, cash-card reloading and key generation.

Hardware

To be PCI HSM compliant, a platform must address the following physical security requirements:

  • Tamper-detection and response mechanisms
  • Resilience to abnormal environmental and operating conditions
  • Protection of sensitive data within the device
  • Preventing disclosure of sensitive information by external monitoring techniques
  • Protection of cryptographic keys inside the device, even if the security boundary is breached

Software and Settings

HSM software, configuration and management must address the following logical security requirements:

  • Resilience against unexpected command sequences or operating modes
  • Secure firmware management
  • Strong authentication prior to running sensitive services
  • Secure key management and key separation to prevent misuse and eliminate cleartext exposure of sensitive data and PINs
  • Secure audit trail

Supply Chain

The HSM vendor is required to provide evidence to the PCI HSM evaluation team that effective processes are in place to ensure that the HSM is secured at all times, from the time of manufacture to packaging and shipment to the end user.

Related Articles

What is SWIFT CSC?

SWIFT, the Society for Worldwide Interbank Financial Telecommunications, is a messaging network that financial institutions use to securely transmit information and instructions through a standardized system of codes.1

SWIFT Customer Security Controls (CSC) Framework

According to SWIFT:

The SWIFT Customer Security Controls Framework describes a set of mandatory and advisory security controls for SWIFT customers.

Mandatory security controls establish a security baseline for the entire community, and must be implemented by all users on their local SWIFT infrastructure. SWIFT has chosen to prioritise these mandatory controls to set a realistic goal for near-term, tangible security gain and risk reduction.

Advisory controls are based on good practice that SWIFT recommends users to implement. Over time, mandatory controls may change due to the evolving threat landscape, and some advisory controls may become mandatory.

All controls are articulated around three overarching objectives:

  • 'Secure your Environment'
  • 'Know and Limit Access'
  • 'Detect and Respond'

The controls have been developed based on SWIFT's analysis of cyber threat intelligence and in conjunction with industry experts and user feedback. The control definitions are also intended to be in line with existing information security industry standards.2

Related Articles

1https://www.investopedia.com/terms/s/swift.asp
2https://www.swift.com/myswift/customer-security-programme-csp/security-controls

What is ISO/IEC 27002:2013?

ISO/IEC 27002 is an international standard used as a reference for controls when implementing an Information Security Management System, incorporating data access controls, cryptographic control of sensitive data and key management.

Regulation Summary

Among the best practices called for in ISO/IEC 27002 are:

  • Data access controls
  • Cryptographic control of sensitive data
  • Management and protection of encryption keys
  • Recording and archiving “all significant events concerning the use and management of user identities and secret authentication information” and protecting those records from “tampering and unauthorized access.”

Related Articles

What is Network Encryption?

Network encryption protects data moving over communications networks. The SSL (Secure Sockets Layer) standard (the technology behind the padlock symbol in the browser, more properly referred to as Transport Layer Security, or TLS) is the default form of network data protection for Internet communications, and its familiar icon gives customers peace of mind. Many security-conscious companies go one stage further and protect not only their Internet traffic but also their internal networks, corporate backbone networks, and virtual private networks (VPNs) with network-level encryption.

As with any low-level security technique, however, network-level data encryption is a fairly blunt instrument. The network is almost completely blind to the value of the data flowing over it and, lacking this context, is usually configured to protect either everything or nothing. And even when the “protect everything” approach is taken, a potential attacker can glean valuable information from network traffic patterns.

Encrypting data as it moves over a network is only part of a comprehensive network data encryption strategy. Organizations must also consider risks to information at its origin, before it moves, and at its final destination. Stealing a car from a parking lot or private garage is much easier than stealing one traveling at high speed on the freeway!
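As a concrete illustration of transport-level protection, the sketch below wraps an ordinary TCP socket in TLS using Python’s standard ssl module. It is a minimal client-side sketch, not a complete deployment; the host name is a placeholder supplied by the caller.

```python
import socket
import ssl

# Client-side TLS context with secure defaults: certificate
# verification and hostname checking are both enabled.
context = ssl.create_default_context()

# Refuse anything older than TLS 1.2.
context.minimum_version = ssl.TLSVersion.TLSv1_2

def open_tls_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TCP connection and wrap it in TLS.

    All application data sent through the returned socket is
    encrypted on the wire; only the two endpoints see plaintext.
    """
    raw_sock = socket.create_connection((host, port))
    return context.wrap_socket(raw_sock, server_hostname=host)
```

Note that this protects data only while it is in transit between the two endpoints; as the section above explains, data at the origin and destination still needs its own protection.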

Related Articles

Thales High Speed Encryption solutions enable you to encrypt everywhere -- from network traffic between data centers and the headquarters to backup and disaster recovery sites, whether on premises or in the cloud. Learn more at the following links:

What is Transparent Encryption?

While the meaning of “transparent” may differ from provider to provider, CipherTrust Transparent Encryption provides continuous file-level encryption that protects against unauthorized access by users and processes in physical, virtual, and cloud environments. The implementation is seamless and transparent to your applications/databases and storage, and so it can work across an enterprise’s entire environment, keeping both business and operational processes working without changes even during deployment and roll out.

CipherTrust Transparent Encryption is a Thales product.

Compliance

Encryption is a recommended best practice for almost all compliance and data privacy standards and mandates, including PCI DSS, HIPAA/Hitech, GDPR, and many others.

Related Articles

Secure your data at rest, comply with regulatory and industry standards and protect your organization’s reputation. Learn how Thales can help:

What is End-to-End Encryption?

In end-to-end encryption, data is protected by default wherever it goes over its entire lifecycle. Sensitive data is encrypted the moment it is captured, in a point-of-sale (POS) device at a retail store, for example, and stays encrypted or is re-encrypted while it moves between systems and security domains. This notion of encryption as a data “bodyguard” that always accompanies data objects (files, documents, records, and so on) is appealing but raises questions about establishing trust relationships between different domains and interoperability when it comes to key management.
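The “bodyguard” idea can be sketched in a few lines. The example below assumes the third-party cryptography package is installed and uses two illustrative key domains, a retail store and a payment processor; the names and flow are hypothetical, and plaintext appears only transiently at the trusted re-encryption boundary between domains.

```python
from cryptography.fernet import Fernet  # assumes the 'cryptography' package

# One key per security domain (illustrative: a store and a processor).
store_key = Fernet.generate_key()
processor_key = Fernet.generate_key()

def capture_at_pos(card_number: str) -> bytes:
    """Encrypt the moment data is captured, e.g. at a POS device."""
    return Fernet(store_key).encrypt(card_number.encode())

def reencrypt_for_processor(ciphertext: bytes) -> bytes:
    """Re-encrypt at the boundary between security domains.

    The plaintext exists only transiently inside this trusted step;
    everywhere else the data travels as ciphertext.
    """
    plaintext = Fernet(store_key).decrypt(ciphertext)
    return Fernet(processor_key).encrypt(plaintext)
```

The sketch also shows why the section above flags key management as the hard part: each domain boundary needs a trust relationship and access to the right keys.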

Related Articles

Secure your data at rest, comply with regulatory and industry standards and protect your organization’s reputation. Learn how Thales can help:

What is Point-to-Point Encryption?

Point-to-point encryption (P2PE) is an encryption standard established by the Payment Card Industry Security Standards Council. It stipulates that cardholder information is encrypted immediately after the card is used with the merchant’s point-of-sale terminal and isn’t decrypted until it has been processed by the payment processor. If the P2PE process is implemented correctly, with account data being encrypted within an approved, secure cryptographic device (SCD), such as a POS terminal, and not decrypted at all within the merchant environment, there is potential for the merchant to be taken almost completely out of scope for PCI DSS.

For P2PE to work as intended, strict controls for protection of, and access to, decryption keys must be in place. The current guidance requires the use of hardware security modules (HSMs) with an appropriate security rating to protect access to those keys. Acquirers and other players in the payments chain have already begun to market value-added services that exploit P2PE to reduce compliance costs for their merchants. From a PCI DSS perspective, any system that has the capacity to decrypt account data comes into scope immediately, so the ability to insulate merchants by protecting keys within HSMs can have significant benefits for all concerned.

Related Articles

Secure your digital assets, comply with regulatory and industry standards, and protect your organization’s reputation. Learn how Thales can help at the following links:

What is Application Layer Encryption?

Application layer encryption is a data-security approach that encrypts nearly any type of data passing through an application. When encryption occurs at this level, data remains encrypted across the lower layers (including disk, file, and database), which increases security by reducing the number of potential attack vectors. Another advantage of application encryption is that, because it encrypts specific fields at the application layer, organizations can secure sensitive data before storing it in database, big data, or cloud environments.
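A minimal sketch of field-level encryption at the application layer, assuming the third-party cryptography package and an in-memory SQLite table (the table and field names are illustrative): the card number is encrypted inside the application, so the database, file, and disk layers only ever see ciphertext.

```python
import sqlite3
from cryptography.fernet import Fernet  # assumes the 'cryptography' package

# Application-held key; in production this would come from a key manager.
field_key = Fernet(Fernet.generate_key())

def store_customer(conn, name: str, card_number: str) -> None:
    """Encrypt the sensitive field before it ever reaches the database."""
    conn.execute(
        "INSERT INTO customers (name, card_number) VALUES (?, ?)",
        (name, field_key.encrypt(card_number.encode())),
    )

def load_card_number(conn, name: str) -> str:
    """Decrypt the field in the application after reading it back."""
    row = conn.execute(
        "SELECT card_number FROM customers WHERE name = ?", (name,)
    ).fetchone()
    return field_key.decrypt(row[0]).decode()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, card_number BLOB)")
store_customer(conn, "alice", "1234-5678-1234-5678")
```

Because only the application holds the key, a compromise of the database or storage layer alone does not expose the field.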

Related Articles

Secure your data at rest, comply with regulatory and industry standards and protect your organization’s reputation. Learn how Thales can help:

What is Tokenization?

Tokenization protects sensitive data by substituting non-sensitive data. Tokenization creates an unrecognizable tokenized form of the data that maintains the format of the source data. For example, a credit card number (1234-5678-1234-5678) when tokenized (2754-7529-6654-1987) looks similar to the original number and can be used in many operations that call for data in that format without the risk of linking it to the cardholder’s personal information. The tokenized data can also be stored in the same size and format as the original data. So, storing the tokenized data requires no changes in database schema or process.

Data tokenization allows you to maintain control and compliance when moving to the cloud, big data, and outsourced environments.

If the type of data being stored does not have this kind of structure (for example, text files, PDFs, or MP3s), tokenization is not an appropriate form of pseudonymization. Instead, file-system-level encryption, which changes the original block of data into an encrypted version, would be appropriate.
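The vault-based substitution described above can be sketched with a toy, in-memory token vault; a production system would use a hardened, persistent vault with strict access controls.

```python
import secrets

class TokenVault:
    """Toy in-memory token vault (illustrative only).

    Each digit of the original value is replaced with a random digit,
    and separators are kept, so the token preserves the source format.
    """

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._value_to_token:          # stable token per value
            return self._value_to_token[pan]
        while True:
            token = "".join(
                secrets.choice("0123456789") if ch.isdigit() else ch
                for ch in pan
            )
            if token != pan and token not in self._token_to_value:
                break
        self._token_to_value[token] = pan
        self._value_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        """Only systems with vault access can recover the original."""
        return self._token_to_value[token]
```

Because the token keeps the length and separator positions of the original, it fits the same database schema, exactly the property the section above describes.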

Related Articles

Secure your data at rest, comply with regulatory and industry standards and protect your organization’s reputation. Learn how Thales can help:

What is Dynamic Masking?

Dynamic Data Masking is a technology that protects data by dynamically masking parts of a data field. For example, a security team could establish policies so that a user with customer service representative credentials would only receive a credit card number with the last four digits visible, while a customer service supervisor could access the full credit card number in the clear. This functionality makes tokenization with dynamic masking particularly useful for PCI DSS compliance.
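The role-based policy described above might look like the following sketch; the role names are illustrative, not a Thales API.

```python
def mask_card_number(pan: str, role: str) -> str:
    """Return a dynamically masked view of a card number by role.

    A supervisor sees the full value in the clear; everyone else sees
    only the last four digits, with separators preserved.
    """
    if role == "supervisor":
        return pan
    total_digits = sum(ch.isdigit() for ch in pan)
    digits_seen = 0
    masked = []
    for ch in pan:
        if ch.isdigit():
            digits_seen += 1
            masked.append(ch if digits_seen > total_digits - 4 else "*")
        else:
            masked.append(ch)
    return "".join(masked)
```

The key point is that the stored value never changes; only the view returned to each user differs.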

Related Articles

Secure your digital assets, comply with regulatory and industry standards, and protect your organization’s reputation. Learn how Thales can help at the following links:

What is Data at Rest?

When data collects in one place, it is called data at rest. For a hacker, this data at rest — data in databases, file systems, big data lakes, the cloud, and storage infrastructure in general — is probably much more attractive than the individual data packets crossing the network. Data at rest in these environments tends to have a logical structure, meaningful file names, or other clues which betray that this location is where the “money” is — that is, credit cards, intellectual property, personal information, healthcare information, financial information, and so on.

Of course, even data “at rest” actually moves around. For a host of operational reasons, data is replicated and manipulated in virtualized storage environments and frequently “rests” on portable media. Backup tapes are transferred to off-site storage facilities and laptops are taken home or on business trips, all of which increases risk.

Breaches of sensitive data at rest often result in mandated public disclosure of the breach, reductions in sales and share price, and serious damage to the organization’s reputation.

Government regulations and industry associations generally mandate protecting personally identifiable information (PII); protected health information (PHI); and financial information, including credit card and financial account numbers; through pseudonymization techniques, such as encryption or tokenization, and tight control of access to the data through user access management. These techniques are also appropriate for protecting data the organization does not wish to share for its own reasons, such as intellectual property (IP).

In most regulations, if an organization’s data is breached, but it is encrypted and the encryption keys have not been stolen with the data, then the organization does not have to report the breach, because the data is indecipherable and useless to whomever stole it, and no harm is deemed to have come to the person identified with the data.

Related Articles

Secure your digital assets, comply with regulatory and industry standards, and protect your organization’s reputation. Learn how Thales can help at the following links:

What is Full-Disk Encryption (FDE) and What are Self-Encrypting Drives (SED)?

Full-disk encryption (FDE) and self-encrypting drives (SED) encrypt data as it is written to the disk and decrypt data as it is read off the disk. FDE makes sense for laptops, which are highly susceptible to loss or theft. But FDE isn’t suitable for the most common risks faced in data center and cloud environments.

The advantages of FDE/SED include:

  • Simplest method of deploying encryption
  • Transparent to applications, databases, and users
  • High-performance, hardware-based encryption

The limitations of full-disk encryption/self-encrypting drives (FDE/SED) include:

  • Addresses a very limited set of threats (protects only from physical loss of storage media)
  • Lacks safeguards against advanced persistent threats (APTs), malicious insiders, or external attackers
  • Satisfies only minimal compliance requirements
  • Doesn’t offer granular access audit logs

Related Articles

Secure your digital assets, comply with regulatory and industry standards, and protect your organization’s reputation. Learn how Thales can help at the following links:

What is data center interconnect (DCI) layer 2 encryption?

Layer 2 is the data-link layer specified by the Open Systems Interconnection (OSI) model, which standardizes the functions of telecommunications and computing systems around the world.1 Layer 2 encryption secures information at the data-link level as it is transmitted between two points within a network.2

Challenges with Unencrypted Network Data

  • Cybercriminals can “eavesdrop” on unencrypted data traveling over a network. This compromises privacy and makes it possible for these criminals to modify or substitute data to stage more sophisticated attacks.
  • Many industry mandates require protection for data in motion, and organizations that do not implement this protection risk fines and being required to disclose data breaches.
  • Depending on the application, encryption capabilities embedded in routers and switches may not offer the combination of security and performance enterprises need.

Advantages of Layer 2 Encryption

Layer 2 encryption protects data in transit, so it is useful when the transmission line is not secure. But, because the message is decrypted at each host in the transmission path, Layer 2 encryption is best suited for systems in which every transmission host is secure.3

Related Articles

Secure your digital assets, comply with regulatory and industry standards, and protect your organization’s reputation. Learn how Thales can help at the following links:

1 https://www.wideband.net.au/blog/difference-layer-3-layer-2-networks/

2 http://searchsecurity.techtarget.com/definition/link-encryption

3 Ibid

Why do we need the Zero Trust security model now?

The world has changed and we have changed our interactions with each other and with devices. The enterprise no longer has control over a closed network.

The COVID-19 pandemic has shifted people from working in offices to working remotely, businesses have increased their use of cloud platforms supporting a variety of devices and networks, and bad actors are taking advantage of the upheaval to significantly increase account infiltrations.

Legacy security solutions cannot support a Zero Trust network. They are limited in their ability to address cloud security because they rely on a closed perimeter security model that assumes all applications are delivered from the same network location and all users access those applications from the same enterprise entry point.

With Zero Trust security in place, we can provide security anywhere and everywhere, on whatever device our colleagues choose to use. We can strengthen security further by placing access management at the core of the Zero Trust architecture to create a Zero Trust extended ecosystem. Zero Trust architecture built on access management checks identity at the device level and again at the application level, regardless of the device or network, and no matter how often the user hops from application to application.

We don’t know where the threats will stop, so Zero Trust technology is available with different levels of security, giving you choices in the level of risk you accept in your Zero Trust strategy.

What is Zero Trust security?

The Zero Trust security model moves verification to the access point and trusts no person, device or entity.

When Zero Trust architecture is established with access management at the core of the Zero Trust policy, creating a Zero Trust extended ecosystem, you gain continuous authentication, which allows you to check a user’s identity as they hop from application to application.

In contrast to the Zero Trust approach, the VPN model verifies the user once at the entry point to the VPN and uses a static approach to security that gives the user access to everything within the boundaries of the VPN until the user leaves the VPN.

Businesses are experiencing a volatile, uncertain, complex, and ambiguous world, and the Zero Trust model of cybersecurity enables businesses to extend security and continue to do business safely despite the chaos around them.

Why do you need Zero Trust security?

Data breaches are expensive in terms of reputation, repeat business and government penalties. Implementing a Zero Trust network reduces your risk of a data breach and secures apps and data in the cloud.

When users access business data and/or services, they expose their company to threats, even if the users are operating within a VPN onsite. A Zero Trust security framework minimizes vulnerabilities within a corporate network by limiting the user’s access to only the segments approved for the user, not the entire contents of the VPN.

At the same time, applications are being delivered from the cloud, outside of the traditional enterprise data center, and the majority of users are working remotely outside of an established security perimeter in an office building.

The COVID-19 pandemic has been extremely disruptive, and users are likely to be distracted and less security-focused. Zero Trust security provides a safety net to maintain a high level of security without relying on a physical location to authenticate access to the applications and databases a user is authorized to access.

Users want to access everything from anywhere on whatever device they choose. Users are focused on convenience, not the need to meet regulations such as GDPR, CCPA, PCI DSS and HIPAA – but the company must meet regulations or face stiff penalties. A Zero Trust framework enables users to do their jobs and reduces risk for the enterprise.

What role does authentication and access management play in zero trust security?

Zero Trust security is widely accepted as a security model based on the principle of ‘trust no one, verify everywhere’ – i.e., no entity can be trusted. When applications are delivered from multiple clouds and delivery points, authentication plays a key role because the access point becomes the front line of security. The access point is the entryway for a user to access enterprise information and applications.

Authentication is commonly executed with a user name and password. However, a user name and password combination is easily breached.

Multi-factor Authentication (MFA) is a common factor in a Zero Trust policy to validate a user’s identity before allowing access to enterprise information / applications.

To increase the security of the Zero Trust network, Thales adds access management to test the user’s identity against a larger set of adaptive and contextual attributes.
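One common MFA factor mentioned above is the one-time password. The sketch below computes an RFC 6238 time-based one-time password (TOTP) using only the Python standard library; it illustrates the mechanism, not any particular vendor’s product.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password.

    The shared secret (base32-encoded) and the current time window
    are fed through HMAC-SHA1; the result is truncated to a short
    numeric code that changes every `step` seconds.
    """
    if timestamp is None:
        timestamp = int(time.time())
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", timestamp // step)          # time window
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the server computes the same code from the same shared secret and clock, a stolen password alone is not enough to authenticate.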

What are the key concepts of Zero Trust security?

To achieve its goal, Zero Trust access is governed by the following foundational principles:

  • Access to corporate resources is determined by a dynamic policy, enforced on a per-session basis, and updated based on information collected about the current state of client identity, application/service, and the requesting asset, including other behavioral and environmental attributes.
  • All communications to resources must be authenticated, authorized, and encrypted.
  • Authentication and authorization are agnostic to the underlying network.
  • The enterprise monitors and measures the integrity and security posture of all owned and associated assets.
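The principles above can be illustrated with a small per-session policy check; the attribute names and decision values below are illustrative sketches, not a Thales API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """Attributes collected for one session (names are illustrative)."""
    user_authenticated: bool    # e.g. the user passed MFA this session
    device_compliant: bool      # device posture / integrity check
    network_zone: str           # "corporate", "home", or "unknown"
    resource_sensitivity: str   # "low" or "high"

def evaluate_access(req: AccessRequest) -> str:
    """Evaluate a dynamic, per-session, attribute-based decision.

    Every request is checked; no network location is trusted
    implicitly, and riskier contexts trigger stronger controls.
    """
    if not req.user_authenticated:
        return "deny"
    if not req.device_compliant:
        # Unhealthy device: allow only low-sensitivity resources.
        return "allow" if req.resource_sensitivity == "low" else "deny"
    if req.resource_sensitivity == "high" and req.network_zone == "unknown":
        return "step-up-auth"   # require re-authentication
    return "allow"
```

Because the decision is recomputed for every session from current attributes, access adapts as the user’s device, network, or target resource changes.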