Keeping Azets Cozone secure is a continuous activity. This includes staying updated about, and adapting to, the latest security threats, and also ensuring that all systems surrounding the application are updated with the latest patches.

Scope

 

This documentation covers information security measures of the Azets Cozone Portal, the Azets Employee mobile application, Azets Drive, Azets Employee, Azets Activity, and Azets Agreement. For our customers’ convenience, Cozone provides access to other applications as well. For security information on other applications accessible via Cozone Portal, please contact your respective application service provider.

 

Access Management

User creation and deletion, as well as granting and revoking access, are performed by Azets consultants at the customer's request, in accordance with the Agreement.


 Authentication and Encryption

  1. Encryption in transit: All data communication to and from users' computers is encrypted with Transport Layer Security (TLS) 1.2, the most widely used Internet standard for encrypted communication. Certificates use SHA-256 signatures with 2048-bit RSA public keys.
  2. Encryption at rest: All databases, files, and applications are encrypted at rest by AWS using the AES-256 encryption algorithm.
  3. Authentication: We offer flexible authentication options for our customers:
  • External IdP: Customers' employees can access Azets Cozone by authenticating with their internal identity provider and login credentials, so they do not need a separate username and password for Azets Cozone. Today we support federation services based on SAML 2.0, such as ADFS, Okta, OneLogin, and Shibboleth.
  • Local Authentication: This is our default authentication service. It requires users to have a username and password. The password must contain a minimum of 10 characters, and we use an advanced algorithm that prohibits common dictionary words, commonly used passwords, domain-related words, and sequences (such as 1234 or qwerty) to keep the password as strong as possible. Accounts are locked after 5 failed login attempts, and the user receives an email to reset their password before they can log in again. We offer 2-step authentication with SMS, mobile applications (e.g. Google Authenticator), or one-time backup codes; this can be set up by the user in their profile settings. It is also possible to make 2-step authentication mandatory for all employees in a specific company or for critical user roles.
  • Automated log-out: To avoid unauthorized access to information if a computer is left unattended, the system automatically logs a user out after 20 minutes of inactivity (60 minutes with extended session length). Also, when a user's password is changed, all active sessions are closed and the user must log in again.
  • Continuous verification of user: Every call to our servers includes a check of the logged-in user's access rights (see the sketch after this list).
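
The following is an illustrative sketch only of how the idle timeout and per-request access check described above could work; the names (check_request, SESSION_IDLE_MINUTES, user_rights) are hypothetical and do not describe the actual Cozone code.

    from datetime import datetime, timedelta, timezone

    SESSION_IDLE_MINUTES = 20  # 60 when extended session length is enabled

    class SessionExpired(Exception):
        pass

    class AccessDenied(Exception):
        pass

    def check_request(session: dict, resource: str, user_rights: set) -> None:
        """Run on every server call: enforce the idle timeout and verify access rights."""
        now = datetime.now(timezone.utc)
        if now - session["last_activity"] > timedelta(minutes=SESSION_IDLE_MINUTES):
            raise SessionExpired("logged out after inactivity")
        if resource not in user_rights:
            raise AccessDenied("no access to " + resource)
        session["last_activity"] = now  # refresh the idle timer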

Local Authentication

 

Cryptographic tools, including bcrypt and other password hashing functions, are used through technology components embedded in the architectural framework. Using these components as part of the framework is security best practice, because it provides a range of tested, reliable cryptographic functions used centrally in the software, which reduces the risk of security vulnerabilities. The details are as follows:

  • Validation components are used to check password strength. The components provide a robust password-strength algorithm, given certain assumptions, by checking the password against a variety of factors, such as the use of common passwords, phrases, and patterns (like "123456").
  • Password hashing follows best practice and is based on bcrypt (the Blowfish cipher). When a password is stored using bcrypt, it is actually the hashed version of the password that is stored. This means that in the event of a compromise, the passwords are not usable without being reverse-engineered, which is computationally difficult and time-consuming (see the sketch below).
  • A cryptographic hash function for password protection is also used alongside bcrypt; it provides hashing and comparison for non-sensitive data.
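
For illustration, a minimal sketch of the bcrypt pattern described above, using Python's bcrypt package; the function names are hypothetical, and the real components are part of the architectural framework rather than standalone helpers.

    import bcrypt

    def hash_password(plain: str) -> bytes:
        # gensalt() embeds a per-password random salt and the configured work factor
        return bcrypt.hashpw(plain.encode("utf-8"), bcrypt.gensalt(rounds=12))

    def verify_password(plain: str, stored_hash: bytes) -> bool:
        # checkpw re-hashes the candidate with the salt embedded in the stored hash
        return bcrypt.checkpw(plain.encode("utf-8"), stored_hash)

    # Only the hash is persisted, never the plain-text password.
    stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", stored)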

 

Password policy:

  • At least 10 characters long
  • At least 1 uppercase letter
  • At least 1 lowercase letter
  • At least 1 number

 

In addition, a password may be rejected if it contains common words or phrases that would make it weak even though it meets all other criteria (see the sketch below).
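
A minimal sketch of how the policy above could be enforced; the common-password list shown here is a tiny hypothetical sample, whereas the real check uses a much larger dictionary and additional heuristics.

    import re

    COMMON_PASSWORDS = {"password12", "qwerty12345", "1234567890"}  # illustrative sample

    def is_valid_password(candidate: str) -> bool:
        """Length, uppercase, lowercase, digit, and not a known weak password."""
        if len(candidate) < 10:
            return False
        if not re.search(r"[A-Z]", candidate):
            return False
        if not re.search(r"[a-z]", candidate):
            return False
        if not re.search(r"[0-9]", candidate):
            return False
        if candidate.lower() in COMMON_PASSWORDS:
            return False
        return True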

 

Accounts are locked after 5 failed login attempts. 


MFA: We offer 2-step authentication with SMS, mobile applications (e.g. Google Authenticator, Microsoft Authenticator), or one-time backup codes. This can be set up by the user in their profile settings. It is also possible to make 2-step authentication mandatory for all employees in a specific company.
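
As an illustration of the authenticator-app option, the sketch below shows a generic TOTP (time-based one-time password) flow compatible with apps such as Google Authenticator; the pyotp library and the names used are assumptions for the example, not a description of Cozone's implementation.

    import pyotp

    # Enrollment: generate a per-user secret and a provisioning URI (shown as a QR code).
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(name="user@example.com", issuer_name="ExampleApp")

    # Verification: accept the 6-digit code typed in from the authenticator app.
    def verify_code(user_secret: str, code: str) -> bool:
        # valid_window=1 tolerates one 30-second step of clock drift
        return pyotp.TOTP(user_secret).verify(code, valid_window=1)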


Password reset: Users can request a password reset from the login page by submitting their username. A password-reset URL is then sent to the email address registered for that username, and when the user clicks the link they are asked to select a new password for Cozone. If a user has forgotten their username, support must be contacted. We do not give out any passwords via our service desk; the user is referred to the customer's administrator. If the user cannot reset the password because the email address registered in Cozone is missing or incorrect, the request is handled by the client administrator, who can order a new email address for the user by contacting the responsible consultant. The user can then use the process above to reset the password.
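
The sketch below illustrates one common way to implement such a reset link: a random, time-limited token that is stored only as a hash. The helper names and the one-hour expiry are assumptions for the example, not Cozone specifics.

    import hashlib
    import secrets
    from datetime import datetime, timedelta, timezone

    RESET_TOKEN_TTL = timedelta(hours=1)  # illustrative expiry

    def create_reset_token():
        """Return (token to embed in the emailed URL, hash to store, expiry time)."""
        token = secrets.token_urlsafe(32)
        token_hash = hashlib.sha256(token.encode()).hexdigest()
        return token, token_hash, datetime.now(timezone.utc) + RESET_TOKEN_TTL

    def token_is_valid(presented: str, stored_hash: str, expires_at) -> bool:
        presented_hash = hashlib.sha256(presented.encode()).hexdigest()
        return (secrets.compare_digest(presented_hash, stored_hash)
                and datetime.now(timezone.utc) < expires_at)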


Customer Data Access: The application has built-in business logic to handle access. Direct access to customer data via the database requires a confirmation process.


Customer data 

Applications in Cozone may be used to store personally identifiable information. We store only information that is agreed with the client, outlined in the data processing agreement between the data processor and the client, required by local laws, and covered by the EU GDPR.


Privacy and data retention

 

Azets does not own the customer/personal data; it acts only as a “data processor”.

For more information about privacy, please see:

https://login.azets.com/privacy

 

We are using data cleanup scripts that enforce the agreed-upon data retention policies.

 

Inactive users are automatically purged after 60 days from the inactivation date.
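
A minimal sketch of such a cleanup job, assuming a relational users table with an inactivation timestamp; the table and column names are hypothetical and the placeholder style depends on the database driver.

    from datetime import datetime, timedelta, timezone

    PURGE_AFTER_DAYS = 60

    def purge_inactive_users(conn) -> int:
        """Delete accounts that were inactivated more than 60 days ago."""
        cutoff = datetime.now(timezone.utc) - timedelta(days=PURGE_AFTER_DAYS)
        cur = conn.cursor()
        cur.execute(
            "DELETE FROM users WHERE inactivated_at IS NOT NULL AND inactivated_at < %s",
            (cutoff,),
        )
        conn.commit()
        return cur.rowcount  # number of purged accounts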


In the Employee app, there are three types of data purging required by the GDPR.

1. Personal data cleaner - removes personal data we no longer need. It runs once 120 days have passed after an employee’s resignation (resignation date) and it removes:

  • Bank account
  • Social security number
  • Addresses and phone numbers
  • Next of kin and children
  • Payslips

2. Payroll data cleaner - removes all time-reporting data and transactions. It runs when the data is older than the number of years selected in the company setting. Targeted data for the payroll data cleaner is:

  • Time reports and everything in them
  • Period transactions
  • Requests for absence
  • Exports of such data

3. HR data cleaner - removes HR data once it is older than the period selected in the company setting. Targeted data is:

  • Resigned employees
  • Exports of user changes

Payroll data is removed according to the company setting (5 or 10 years).

HR information is removed according to the company setting (5, 10, or 15 years).
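
For illustration, the sketch below shows how the three retention rules could be evaluated; the function names are hypothetical and the thresholds simply mirror the settings described above.

    from datetime import date

    def personal_data_due(resign_date: date, today: date) -> bool:
        """Personal data cleaner: 120 days after the resignation date."""
        return (today - resign_date).days > 120

    def payroll_data_due(data_year: int, retention_years: int, today: date) -> bool:
        """Payroll data cleaner: company setting of 5 or 10 years."""
        return today.year - data_year > retention_years

    def hr_data_due(resign_year: int, retention_years: int, today: date) -> bool:
        """HR data cleaner: company setting of 5, 10, or 15 years."""
        return today.year - resign_year > retention_years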


Incident Handling

 

The Cozone incident handler informs the partners, through their stakeholders, about any incidents or planned downtime. The partners then handle communication with each customer according to their SLAs.

 

Physical Security


Our products are hosted on Amazon Web Services (AWS) in the Stockholm Region. The application runs across all 3 Availability Zones. Under the shared security responsibility model, AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. AWS is certified against several standards, for example ISO 27001 and SOC 2. To read more about their physical security, refer to the latest security whitepapers on their security pages.

https://aws.amazon.com/security/

https://aws.amazon.com/compliance/data-center/controls/ 

We are using a multi-tenant model. All customers live on a single shared platform.

 

Network Security

 

We are using the AWS GuardDuty service. Amazon GuardDuty is a continuous security monitoring service that analyzes and processes the following data sources: VPC Flow Logs, AWS CloudTrail management event logs, CloudTrail S3 data event logs, and DNS logs. It uses threat intelligence feeds, such as lists of malicious IP addresses and domains, and machine learning to identify unexpected and potentially unauthorized or malicious activity within the AWS environment. This can include issues like escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, URLs, or domains. It also monitors AWS account access behavior for signs of compromise, such as unauthorized infrastructure deployments (for example, instances deployed in a Region that has never been used) or unusual API calls (such as a password policy change that reduces password strength).
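
As a hedged example of how such findings can be pulled programmatically, the sketch below lists high-severity GuardDuty findings with boto3; the region and the severity threshold are assumptions for illustration.

    import boto3

    guardduty = boto3.client("guardduty", region_name="eu-north-1")  # Stockholm Region

    # A GuardDuty detector must already exist in the account/region.
    detector_id = guardduty.list_detectors()["DetectorIds"][0]

    # List findings with severity 7.0 or higher (GuardDuty "high" severity).
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
    )["FindingIds"]

    for finding in guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)["Findings"]:
        print(finding["Type"], finding["Severity"], finding["Title"])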


As for DDoS protection, we are using the AWS Shield service, which defends against the most common, frequently occurring network- and transport-layer DDoS attacks that target our websites and applications.

In order to prevent and monitor security attacks, the application sits behind an application load balancer provided by AWS, and IDS/IPS is handled by AWS.

 

System Architecture

 

Services and infrastructure

Amazon RDS manages the work involved in setting up a relational database: from provisioning the infrastructure capacity to installing the database software. Once the database is up and running, Amazon RDS automates common administrative tasks such as performing backups and patching the software that powers the database.


All applications and infrastructure are deployed as code, meaning that we can recreate our infrastructure and applications simply by triggering a new release of everything, which takes approximately 4 hours. We are using AWS RDS as a managed service for our databases.


 

Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Amazon RDS uses several different technologies to provide failover support. Multi-AZ deployments for MariaDB instances use Amazon's failover technology.

 

In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups.
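
A hedged sketch of provisioning such a Multi-AZ MariaDB instance with boto3; identifiers, sizes, and credentials are placeholders for illustration, not our actual configuration.

    import boto3

    rds = boto3.client("rds", region_name="eu-north-1")  # Stockholm Region

    rds.create_db_instance(
        DBInstanceIdentifier="example-db",      # placeholder name
        Engine="mariadb",
        EngineVersion="10.3",
        DBInstanceClass="db.m5.large",          # illustrative instance class
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me-please",  # placeholder secret
        MultiAZ=True,                           # synchronous standby in another AZ
        StorageEncrypted=True,                  # AES-256 encryption at rest
        BackupRetentionPeriod=7,                # daily snapshots kept for 7 days
    )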


Redundancy: Our systems run on a high-availability platform. Application servers are spread across 3 Availability Zones in the Stockholm region.


 Virus protection: Any uploaded or downloaded file is scanned for malware. 


Backups: We have comprehensive backup routines that ensure service continuity. We have established routines to take backups every day (during the night), every week, and every month. The backups are stored on encrypted hard drives using 256-bit AES encryption, and no backup tapes are used. Snapshots of all data are stored for 7 days.


 We are currently using the following AWS services: 

a. Storage: S3, EBS,

b. DB: RDS, MariaDB 10.3

c. Compute: EC2, ECR, VPC, Elastic Load Balancing, EKS, Lambda

d. Management tools: CloudTrail, CloudWatch

e. Security, Identity & Compliance: AWS Identity and Access Management (IAM), Amazon Inspector, AWS Certificate Manager, AWS Key Management Service, AWS GuardDuty


Infrastructure Design

For our latest Infrastructure Design, please see VPCs and Subnets; for more details about AWS RDS, please visit: https://aws.amazon.com/blogs/database/amazon-rds-under-the-hood-multi-az/

 

For extended information about AWS and how their services are set up, please visit: https://aws.amazon.com/compliance/soc-faqs/ (Need AWS account to access)

 

Failover Process for Amazon RDS

 

In the event of a planned or unplanned outage of our DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone. The time it takes for the failover to complete depends on the database activity and other conditions at the time the primary DB instance becomes unavailable. Failover times are typically 60–120 seconds.

 

Recovery of Data


 

To recover from a total database loss, we can be back online within a maximum of 5 hours during office hours, with a maximum of 24 hours of data loss. Outside office hours, recovery is performed on a best-effort basis, with no additional data loss.

 

Snapshot

If restoring from snapshot we have a maximum data loss of 24 hours.

 

Applications                     RTO
Control, IDP, Portal             1.5h
Activities, Agreement, Drive     3h
Employee                         4.5h

 

 

Periodic Reviews

Server patches: We do a vulnerability scan on our servers once a week. Depending on the severity of the results, servers can be patched right away. 


 Penetration testing: Once a year, we perform penetration testing using external party services. 


Logging

 

  1. All requests to the Cozone applications are logged to an access log. Sensitive data such as session IDs and access tokens is sanitized and not accessible to developers.
  2. Successful and failed user authentication attempts are logged to the authentication log.
  3. Important data changes by any user are logged to the audit log with a timestamp of when they occurred.
  4. All errors and warnings that occur in production are accumulated in the application log.
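
For illustration, a minimal sketch of the sanitization described in item 1: a logging filter that redacts session IDs and access tokens before they reach the access log. The pattern and logger names are assumptions for the example.

    import logging
    import re

    TOKEN_PATTERN = re.compile(r"(session-id|access[_-]?token)=\S+", re.IGNORECASE)

    class SanitizingFilter(logging.Filter):
        """Redact sensitive values before a record is written to the access log."""
        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = TOKEN_PATTERN.sub(r"\1=[REDACTED]", str(record.msg))
            return True

    access_log = logging.getLogger("access")
    access_log.addFilter(SanitizingFilter())
    access_log.warning("GET /drive/files?access_token=abc123")  # token is redacted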

 

Secure Development Cycle

 

Our applications are developed using the agile frameworks Scrum or Kanban. Development of new features includes code reviews and extensive testing. We cover different scenarios with automated unit and integration tests, while regression and acceptance testing are done by QA specialists and Product Managers.


Acceptance testing and release approvals are conducted before every release by the development management. Any code change that does not meet the acceptance criteria or fails integration tests is not approved for release to production.


 

There are development policies in place, with peer reviews, an automated code sniffer, and enforced coding standards. The architectural model is designed to support central input management, including XSS filtering and session management.
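
As a final illustration, a minimal sketch of central input handling for XSS protection, assuming values are later rendered as HTML; the helper name is hypothetical, and the real framework applies filtering centrally rather than at each call site.

    import html

    def sanitize_input(value: str) -> str:
        """Escape HTML-special characters so user input cannot inject markup or scripts."""
        return html.escape(value, quote=True)

    # Example: an attacker-supplied display name is rendered harmlessly.
    print(sanitize_input('<script>alert("xss")</script>'))
    # -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;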