Security SaaS API¶
The API image runs the .NET Core app directly from the /app folder. The same image takes care of compiling the code, and the build is prepared to be cached, so some steps can be skipped if nothing has changed since previous builds.
Building the image¶
The image must be built from the /Source
folder. The Dockerfile
is inside the Projects/Sequel.Security.SaaS
folder. Building the image does not require any parameters, but the appsettings.json
file will not be included in the image, this must be included either by creating a new image or using a volume or secret.
docker image build -f Projects/Sequel.Security.SaaS/Dockerfile -t security/saas-api .
Use BuildKit to build images
It is recommended to use BuildKit instead of the classic build system: it improves build times and takes advantage of the advanced caching system this new builder brings.
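A minimal sketch of a BuildKit-enabled build, assuming a Docker version where BuildKit is opt-in via the DOCKER_BUILDKIT environment variable:
# enable BuildKit for this invocation only
DOCKER_BUILDKIT=1 docker image build -f Projects/Sequel.Security.SaaS/Dockerfile -t security/saas-api .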
Preparing the app settings¶
As mentioned before, the image does not include an appsettings.json file, so it will always fail to run if none is provided. There are three ways to add the settings into the container:
- Create a new image which adds the file inside /app (not recommended).
- Create a volume that maps the file into the container (-v argument in docker); see the sketch after this list.
- Create a secret in Swarm or Kubernetes and mount it at /app/appsettings.json.
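For example, a minimal sketch of the volume approach (the local file path is an assumption, adjust it to your environment):
# bind-mount a local appsettings.json into the container, read-only
docker run -v $(pwd)/appsettings.json:/app/appsettings.json:ro security/saas-api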
There are also some settings that must be set taking the deployment environment into account:
- The TrustForwardedHeaders setting should be set to true if the API will be behind a reverse proxy/load balancer.
- Ensure paths for Ansible are correct and point to paths inside the container.
- Prepare the Ansible variables for the playbooks (see the guide below).
- The hosts Ansible file must contain a host that points to a server with access to the k8s cluster and with docker running (see the guide below).
- The Database context must point to the right SQL Server instance, using the right hostname (including .office.sbs), and must use the SQL Server Authentication login method (provide User Id and Password). Do the same for the Logging context.
- Ensure the internal and external URLs for the API and Authentication services are correct.
- Check the email host to ensure it is accessible (the health check may help).
- Check the RabbitMQ host to ensure it is accessible too.
- Check the health check (example command: curl -i localhost/health).
Access host services from Docker
To access host services from inside a container, you must use special domains or IPs, and these depend on the OS:
- Docker Desktop for Windows: use docker.for.win.localhost to access host services.
- Docker Desktop for macOS: use docker.for.mac.localhost to access host services.
- Docker on Linux: there are several ways to do it, but probably the easiest is to point to the external IP of the host. Another way is to use the gateway of the docker network (and ensure services are listening on this network interface too); see the example after this list.
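For example, on Linux the gateway IP of the default bridge network can be looked up like this (a sketch; the network name may differ if you use a custom network):
# print the gateway of the default docker bridge network
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'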
SQL Server on Windows
This will not work unless several changes are applied to the SQL Server. First, as mentioned, the API uses the SQL Server Authentication method for login instead of Windows AD, so this kind of login must be enabled. The API also connects over TCP, which must be enabled too.
- Enable TCP connections in SQL Server.
- Enable SQL Server Authentication login in SQL Server Management Studio.
- If you cannot find the SQL Server Configuration Manager, see this link.
- Restart the MSSQL service.
- Then create a login using the SQL Server Authentication method, and give that user permission to the databases.
Running the image¶
The best way to run the image is to deploy it directly to a Kubernetes cluster using the Helm chart (see below).
Image runs as a non-privileged user
By default the image runs as a non-privileged user called saas (1000:1000). On Linux, if you encounter any issues when running the image, ensure the appsettings.json file is readable by that user or owned by it. On Windows or macOS this cannot be changed, and mounted files have 777 permissions (all permissions) by default.
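For example, on Linux the file can be handed over to that user before mounting it (a sketch; the path is an assumption):
# give the file to UID/GID 1000 (the saas user inside the image) and make it readable
sudo chown 1000:1000 ./appsettings.json
chmod 644 ./appsettings.json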
Running behind Reverse Proxies
Usually, in a reverse proxy environment, the published services have path prefixes so several services can run under the same domain. To avoid any issues with URLs in Security SaaS API, it is recommended to set the PATH_PREFIX environment variable in the container and disable any path prefix stripping in the reverse proxy; Security SaaS API will do the rest.
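A minimal sketch of passing the prefix when running the container directly (the /saas/ value matches the pathPrefix used later in this guide and is otherwise an assumption):
# expose the API under the /saas/ prefix of the reverse proxy
docker run -e PATH_PREFIX=/saas/ -v $(pwd)/appsettings.json:/app/appsettings.json:ro security/saas-api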
Prepare environment¶
Before running the API, either for local development or for production environments, a few things are required for it to work properly.
Development environment
There will be a guide for this soon...
The SaaS API runs the scripts through an SSH connection to a host, so the target host must first be configured to run the scripts.
- Ensure the server where the scripts are going to run has docker installed and working.
- Create a new user or pick an existing one. The user must be in the docker group (sudo usermod -a -G docker $USER).
- Ensure the kubectl CLI tool is installed (see https://kubernetes.io/docs/tasks/tools/install-kubectl/).
- Ensure the helm v3 tool is installed (see https://helm.sh/docs/intro/install/).
- Ensure the target Kubernetes cluster is configured - the $HOME/.kube/config file must point to the cluster.
- Log in to the registries as that user (docker login docker.sequel.com). Do the same with Helm (helm repo add --username $NEXUS_USER --password $NEXUS_PASS sequel https://nexus.sequel.com/repository/helm).
- The server must have Python 3 installed:
  - On Debian/Ubuntu, it is installed by default.
  - On CentOS/RHEL install these packages: sudo yum install python3 libselinux-python3.
- Install the docker python package:
  - On Debian/Ubuntu install python3-docker.
  - On CentOS/RHEL install python36-docker.
- Create an SSH key pair and add the public one into .ssh/authorized_keys in the user's home (remember that this file must have 0600 permissions and be owned by the user). Example command to create an SSH key pair: ssh-keygen -t ed25519 -C "sec-saas" (when asked where to place it, change the path). See the sketch after this list.
- (optional) Configure sshd to disable password authentication.
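A sketch of installing the generated public key on the target host, assuming the key was saved as ~/.ssh/sec-saas and the user/host from the inventory example below (saas@host.office.sbs):
# copy the public key into the target user's authorized_keys
ssh-copy-id -i ~/.ssh/sec-saas.pub saas@host.office.sbs
# or, manually, on the target host:
# cat sec-saas.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys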
After preparing the user to run the scripts, it is time to prepare the database and user in the SQL Server. For this, create a user with a password and give it the sysadmin role (or an equivalent role that can create users and databases). Then create a database which will hold the data for the API. This can be done using SQL Server Management Studio.
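This can also be scripted with the sqlcmd CLI; a sketch, assuming the server, login and database names used later in this guide (identitydb.office.sbs, sec-saas, SecuritySaaS):
# create the login, grant it sysadmin, and create the API database
sqlcmd -S identitydb.office.sbs -U sa -P '<SA_PASSWORD>' -Q "CREATE LOGIN [sec-saas] WITH PASSWORD = 'V3ryS()chP4ssw0rd'; ALTER SERVER ROLE sysadmin ADD MEMBER [sec-saas]; CREATE DATABASE SecuritySaaS;"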
Prepare the deployment¶
Now it is time to prepare the Helm chart variables for the environment. These variables configure the deployment in Kubernetes as well as the scripts. First create a yaml file and fill it with the following options (this file will be referred to as helm-vars.yaml in this guide):
image:
  # if using a different repository, change it here
  repository: docker.sequel.com/security/saas-api
  # tag: latest

# if ingress is going to be used, then enable this
ingress:
  enabled: true
  hosts:
    - host: identity.office.sbs
      paths:
        - /saas
  # if HTTPS is going to be used, fill this section with the secret and hosts
  tls:
    - secretName: security-saas-tls-cert # the secret must have this name if it shares the host with the security instances
      hosts:
        - identity.office.sbs

# this is the prefix it will have under the reverse proxy/load balancer (should match the path in the ingress but with a trailing slash /)
pathPrefix: '/saas/'

# change the hostname (host.office.sbs) and user to your needs
# this is the hosts inventory file contents
hostsInventory: |-
  [me]
  host.office.sbs ansible_user=saas
  [all:vars]
  ansible_python_interpreter=/usr/bin/python3

connectionString: Server=identitydb.office.sbs;Database=SecuritySaaS;User Id=sec-saas;Password=V3ryS()chP4ssw0rd;MultipleActiveResultSets=true

# put here several ansible variable files: the key is the name of the file and the contents are the contents of the file
# there are plenty of variables to change, they are all documented through the roles in the ansible playbooks
# below there is a basic configuration
ansibleExtraVariables:
  vars:
    # full path where to store some files for the instances (should be readable and writable by the user)
    security_installations_folder: /home/saas/installations
    # Go connection string for the admin user
    security_admin_connection_string: 'sqlserver://sec-saas:V3ryS()chP4ssw0rd@identitydb.office.sbs?connection+timeout=10'
    # sql server address where the instances will be placed (must be the same as the above variable)
    security_sql_server_address: identitydb.office.sbs
    # repository name for the Sequel Helm chart
    security_helm_chart_repository: sequel
    security_helm_chart_repository_type: Helm
    # repositories where the security images are stored
    security_admin_image: docker.sequel.com/security/admin
    security_api_image: docker.sequel.com/security/api
    security_authentication_image: docker.sequel.com/security/authentication
    security_authorization_image: docker.sequel.com/security/authorization
    security_tools_image: docker.sequel.com/security/tools
    # in local development, this is not needed
    security_authentication_settings:
      singleSignOn:
        cookieDomain: .office.sbs
    # template for the user and databases for instances
    security_database_template: 'saas-%s'
    # external URLs for the ingress and security configurations (but as templates)
    security_url_external_templates:
      admin: https://identity.office.sbs/%s/admin
      api: https://identity.office.sbs/%s/api
      authentication: https://identity.office.sbs/%s/authentication
      authorization: https://identity.office.sbs/%s/authorization

# put here the contents of the private SSH key file
sshPrivateKey: |-
  -----BEGIN OPENSSH PRIVATE KEY-----
  ...
  -----END OPENSSH PRIVATE KEY-----
With this configuration, we set up the environment for the ansible playbooks through the hosts file (aka inventory) and preconfigured variables. This setup is important because the operations depend on these variables being filled properly. See the readme for the Security SaaS chart (Source/Deployment/helm/sec-saas in the source code) to learn more about these settings, as well as the readmes for all roles (all folders inside Source/Deployment/ansible-playbooks/roles in the source code) in the ansible playbooks folder.
The deployment is done using a Helm chart; copy the Helm chart from the source (or use it directly) for the deployment.
Deploy time¶
It is time to deploy! With the helm command, the deployment is done in a single step:
helm install -f helm-vars.yaml salamandra /path/to/the/chart/sec-saas
Dry run
If you want to ensure everything is fine before deploying, run the previous command but with some modifications:
helm install --dry-run --debug -f helm-vars.yaml salamandra /path/to/the/chart/sec-saas
RBAC and Roles
If deploying the chart fails due to unknown API version (rbac.authorization.k8s.io/v1beta1
) or something related to the Role
and RoleBinding
, then disable the RBAC from the chart (add this in the yaml file):
rbac:
  create: false
To check if your cluster has RBAC enabled, run the command kubectl api-versions and look for a line that looks like rbac.authorization.k8s.io/v1beta1. If there is no such line, then RBAC is not enabled and the lines above must be included.
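For example, a quick way to filter the output (a sketch using grep):
# prints the RBAC API versions, or nothing if RBAC is not enabled
kubectl api-versions | grep rbac.authorization.k8s.io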
Namespace
It is recommended to use a separate namespace in production, so all instances are put in the same namespace and that namespace belongs to them alone. In our production environment, we use the sec-saas namespace. To create a namespace use kubectl create namespace sec-saas.
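When using a dedicated namespace, pass it to Helm as well; a sketch reusing the install command from above:
# deploy the chart into the sec-saas namespace
helm install -n sec-saas -f helm-vars.yaml salamandra /path/to/the/chart/sec-saas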
To check whether it is running, run kubectl get pods and look for the SaaS API Pod (something like salamandra-saas-api-64f9c8f464-tcnng), then run kubectl describe pod salamandra-saas-api-64f9c8f464-tcnng and you will see whether it is working or not. If you get constant restarts due to bad health checks, add these two variables and redeploy using helm upgrade instead of helm install:
useSequelLogging: false
enableProbles: false
With these options, the health check probes are removed from the deployment and all logs go to stdout, where they can be read with kubectl logs salamandra-saas-api-64f9c8f464-tcnng.
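A sketch of the upgrade command, assuming the same release name and chart path as the install above:
# apply the updated values to the existing release
helm upgrade -f helm-vars.yaml salamandra /path/to/the/chart/sec-saas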
HTTPS and Certificate¶
To properly use HTTPS, a certificate is required. The platform expects a secret called security-saas-tls-cert with the certificate and private key inside. The certificate and key must be in PEM format (a text file that starts with -----BEGIN CERTIFICATE-----). There is a script that helps you create the secret manifest so it can be deployed in the cluster.
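Alternatively, kubectl can create the same kind of secret directly from the PEM files; a sketch, assuming the files are named identity.crt and identity.key and the sec-saas namespace is used:
# create the TLS secret without writing a manifest by hand
kubectl create secret tls security-saas-tls-cert --cert=identity.crt --key=identity.key -n sec-saas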
SaaS certificate generator script
This script creates a file called saas-cert.yaml from a certificate file and a private key file. To run it, first create a file and fill it with the script contents below. Then give the file execute permissions (chmod +x ./cert.sh) and run it (./cert.sh identity.crt identity.key).
#!/bin/bash
# usage: cert.sh <CERTIFICATE> <PRIVATE KEY>
if [[ "$#" -lt 2 ]] || [[ "$1" = '--help' ]] || [[ "$1" = '-h' ]]; then
  echo "$0 <CERTIFICATE> <PRIVATE KEY>"
  exit 0
fi
if [[ ! -f "$1" ]]; then
  echo "Certificate file $1 must exist"
  exit 1
fi
if [[ ! -f "$2" ]]; then
  echo "Private key file $2 must exist"
  exit 1
fi
# write the Secret manifest header
cat > saas-cert.yaml <<-EOM
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: security-saas-tls-cert
data:
EOM
# append the base64-encoded certificate and private key
echo -n '  tls.crt: ' >> saas-cert.yaml
cat "$1" | base64 -w 0 >> saas-cert.yaml
echo >> saas-cert.yaml
echo -n '  tls.key: ' >> saas-cert.yaml
cat "$2" | base64 -w 0 >> saas-cert.yaml
echo >> saas-cert.yaml
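Once the manifest has been generated, it can be applied to the cluster; a sketch, assuming the sec-saas namespace:
# deploy the generated secret
kubectl apply -f saas-cert.yaml -n sec-saas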