What Is Data Tokenization?
Data Tokenization, when applied to data security, is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token, that has no extrinsic or exploitable meaning or value. -- Wikipedia
Tokenization provides anonymization – the process of removing or altering sensitive or protected information so that it cannot be traced back to the original data. All identifying data is removed in order to create unlinkable data. Tokenized data is not mathematically reversible. Tokenization is data security.
Storing tokens reduces the amount of sensitive data in your environment and helps your business meet many types of privacy and data compliance requirements.
Tokenization replaces sensitive data on your systems with a unique string of numbers and letters that has no bearing on the original data. "No bearing" means the stored data is never used to calculate the token value.
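A minimal sketch of this idea, with an in-memory dictionary standing in for the vault (a real service keeps the vault encrypted on separate servers; the function names here are illustrative, not the actual AuricVault® API):

```python
import secrets

# The vault mapping is the ONLY link between a token and its data.
_vault = {}

def tokenize(sensitive_value: str) -> str:
    # The token is purely random -- it has "no bearing" on the data,
    # because it is not calculated from the sensitive value at all.
    token = secrets.token_urlsafe(16)
    _vault[token] = sensitive_value
    return token

def detokenize(token: str) -> str:
    # Recovering the original requires access to the vault itself;
    # no mathematics on the token can reverse it.
    return _vault[token]

card = "4111111111111111"
token = tokenize(card)
# token is a random string; detokenize(token) returns the original card
```

Because the token is generated randomly rather than derived from the data, stealing the tokens alone reveals nothing about the original values.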
Thieves can't steal what isn't there!
Data tokenization is the most secure method of storing sensitive data. It provides both physical and logical separation of data elements. You store tokens in your systems, while the original data is safely encrypted and stored on the AuricVault® servers. You cannot programmatically or mathematically determine the original data from the token itself. You must use the token, together with your credentials, to retrieve the original data from the AuricVault® service. This physical and logical separation of payment and privacy information significantly reduces a company's exposure to information theft and reduces the impact of any security breach.
Data detokenization is the reverse of the tokenization process: the original data is retrieved using the token and your service credentials. Bulk retrieval of original data from its tokenized state should be easy to accomplish, and it is simple with the AuricVault® service.
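Bulk detokenization can be sketched as a credentialed client that resolves many tokens in one pass. The class and method names below are hypothetical stand-ins, not the actual AuricVault® API:

```python
import secrets

class VaultClient:
    """Illustrative client; a real service authenticates the credentials
    on every call and stores the vault encrypted, off-site."""

    def __init__(self, credentials: str):
        self._credentials = credentials
        self._store = {}  # stands in for the remote, encrypted vault

    def tokenize(self, value: str) -> str:
        token = secrets.token_urlsafe(16)  # random, unrelated to the value
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # A real service would verify self._credentials here before
        # releasing the original data.
        return self._store[token]

def detokenize_bulk(client: VaultClient, tokens: list) -> list:
    # Resolve every token back to its original value in one pass.
    return [client.detokenize(t) for t in tokens]

client = VaultClient(credentials="example-api-key")
tokens = [client.tokenize(v) for v in ("alice@example.com", "555-0100")]
originals = detokenize_bulk(client, tokens)
```

The key point the sketch illustrates: retrieval always requires both the token and valid credentials, so tokens leaked on their own remain useless.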