Sam Gabrail – Platform Engineer
Secure Secrets, Speedy DevOps, Simplified
You’re a platform engineer tasked with building a secure app template that generates production-ready apps with all best practices baked in. But how do you protect secrets across GitHub, Kubernetes, and Argo CD without slowing down your devs? And what about those insecure long-lived credentials?
In this video, I’ll show you the developer’s workflow in action and then switch to a platform engineer’s view to reveal how it all comes together securely, end to end. Stay with me to see how it’s done. Before we get into the weeds, let’s take a look at the workflow of what we’re trying to achieve. First off, we have Port.io, which serves as our front-end portal.
You’ll see a form with inputs that you’re going to fill in as a developer to create a project — in this case, a Python Flask application.
And what that’s going to do is go out and trigger a GitHub Actions pipeline through this self-service action defined inside of Port.io. Now the GitHub Actions pipeline will go step by step to create what we’ve assigned it to create. Some of those tasks will be to authenticate and retrieve secrets from Akeyless and also to configure Akeyless.
Ultimately, we want to store some static secrets and dynamic secrets in Akeyless that we’ll want to use with our application.
And then what will happen is the Actions pipeline will create a new repository for our developers to start working on their application. Not only that, an Argo CD application will be created by our GitHub Actions workflow, and it’s going to get connected to this new repo that we created.
And inside of Argo CD, as part of the application, we will have a Python Flask app running with a MySQL database.
And this MySQL database will have credentials that are brought in from Akeyless. These are gonna be dynamic secrets — just-in-time credentials that the Flask app will use to connect to the database. This is a secure way of developing your applications and even running them in production. You don’t want any long-lived credentials of any kind in your environment. The app will continue to retrieve secrets from Akeyless. First, it has to authenticate, and it will use the Kubernetes auth method.
So we’ll use the Kubernetes auth method in Akeyless to solve the secret zero problem — not putting in any kind of long-lived credentials for the application to authenticate into Akeyless. It will use that Kubernetes auth method and retrieve those dynamic secrets based on the time to live (TTL) you configure for them. In the demo, we’re gonna put in fifteen seconds just to show you how quickly the dynamic secrets expire and the app goes and retrieves new ones. And then finally, the logs come back from the GitHub Actions run. Once it’s completed, it will give you some logs in Port. As a developer, you can see the URL of your new repo. You’ll see the Argo CD dashboard in Port and a few other things that you can define there.
And then finally, you’ll have your developer start doing their work in this new repo. They’ll create a new feature branch and then make changes.
And after that, issue a pull request that will trigger a test pipeline. Once that’s been approved and you want to redeploy your application, you just need to give it a version tag and push that tag back to the repo, which will trigger Argo CD to redeploy the application with the new features you’ve added. So that’s the workflow in a nutshell. We’ll dig deeper and see how everything works in the demo.
Let me now put on my developer hat, and I’ll walk you through this section from the point of view of a developer and their workflow. So keep that in mind as we go through this next section. We’re now in the Port.io self-service hub, and we’re going to launch our application.
Again, I’m a developer, so I will see that, hey, this is a way to deploy a Python Flask app, and we’re gonna be using Akeyless and Argo CD. Let’s go ahead and create this app. I am greeted with a form where I can specify the app name and the repository name, and I will give it a name here, 53. And yes, I have run this over fifty times, and you just gotta be patient as a platform engineer.
It’s okay. You’re gonna run into a ton of errors as you’re automating. That is completely normal. It took me a bunch of hours to get this to work, but once you get it to work, then it’s gonna be available for all your developers in your organization.
So just don’t be discouraged. It takes time. Kubernetes namespace: flask-todo.
Database name is todos, and I’m gonna get the root password for my MySQL database from this particular path in Akeyless. Alright. Go ahead and execute, and this will launch a run for us so we can see the progress of the run.
And very soon, the job links will appear here, and it just did. So clicking on this will take me to the GitHub Actions job run, and this is really what’s happening behind the scenes to get this entire job to run and to work. This will take a little bit of time to get everything set up for me as a developer. And once it’s done, we’ll resume the recording, and I’ll explain to you what exactly took place. Looks like the job has completed. If we look back at the Port UI, we can see a few things. The status is success: created the flask-todo app with the Akeyless integration.
I can see the app repo is right here.
I can see the service repo in Port.
We can take a look at that in just a second. And the Argo CD dashboard in Port as well. Let’s take these one at a time. Let’s go ahead and take a look at the repo itself.
So here’s my repo. As a developer, everything is set for me. There’s even instructions for initial setup that will follow in just a second.
And the second piece that we’re looking at is the Port entity.
So I can show you directly from the Port UI, or we can go straight to the link that was given to us. As you can see here, there is the entity flask-todo-app-53. It’s a service blueprint.
And then finally, if we wanna see this on the Argo CD dashboard in Port as well, you can go like that, and you’ll see the status of the application. I see two applications here. One of them was during one of my tests, but, really, we have only one running. And I have a whole video on Port and how you can sync the app, roll back deployments, do all kinds of things with the integration between Port and Argo CD. So you can shield your developers from having to work with Argo CD or even Kubernetes.
So you can take it as far as you wish. But the topic of this video is to get you as a platform engineer to create a template of an application with best practices from a security point of view. We’ll see that in just a little bit with dynamic secrets and so on.
We can go back to our app here, and you can see the flask-todo-app-53.
There are a couple of things I wanna show you. First, we’re gonna clone this and start developing. But before we do that, let’s take a closer look at what’s also running. If I go into the Argo CD dashboard itself and again, you don’t have to expose this to a developer, but if you do, you’ll see that there is an application running. We can see a deployment for MySQL, deployment for Flask.
We see two services, one for the app and one for the database, a persistent volume claim for the database, and a ConfigMap for the database as well — for an init script that creates a table and a few other things that we’ve defined. Alright. So once again, remember, I am wearing a developer hat.
As a developer, I can easily go ahead and clone this.
So we can go ahead and clone this, and I’m gonna open my favorite IDE. In this case, I’ll use Cursor.
Okay.
Let’s open that new window. Alright. So what do we have here? I have a few workflows, build and push workflow.
I have a PR validation workflow, as you can see. I have my app folder. I’ve got some templates.
I’ve got an akeyless_integration.py. I’ve got a routes.py. I’ve got some tests as well.
I have a Dockerfile already, and I have a Flask deployment and a MySQL deployment.
I have a Pipfile as well to go ahead and start developing right away. And, of course, a README, requirements.txt, and a run.py to get started, and a services manifest also. Alright. So what we could do now is we can take a look at what’s happening at the Kubernetes level.
So once again, as a platform engineer, it’s up to you whether you wanna shield your developers from going down to the Kubernetes layer or not. Depending on what you choose, you might need to add some integrations to Port with Kubernetes and so on, but let’s jump in real quick. I can see that I now have my namespace flask-todo, and running kubectl get all, I see the two pods, MySQL and Flask, that are running, and I see my services, deployments, and ReplicaSets. That’s exactly what we saw from Argo CD. Right? So Argo CD now is sitting there and will continuously deploy our application.
But, first of all, let’s go ahead and see what the application looks like. It’s running. Right? So let’s see what this looks like, and then we’re gonna make a change and see how that all fits in. What we can do is kubectl port-forward the flask-todo service to port 8080 on our local machine.
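If you want to run the same thing yourself, it looks roughly like this — the service name, namespace, and ports are assumptions based on this demo’s generated manifests:

```bash
# Forward the Flask service to localhost so the app can be opened in a browser.
# Adjust the service name, namespace, and service port to match your manifests.
kubectl port-forward svc/flask-todo -n flask-todo 8080:8080
```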
And from here, we can go back to our browser and look for localhost:8080.
And we have a simple to-do list application. You can add a hello to-do or get the groceries or something.
You can delete.
So you have a few things that you can do with this application, just very simple, to show you what’s happening. Now in addition to this, before we make changes, let’s take a look at the logs. Let’s do that by running this on the side here.
Alright. Now let’s jump in and take a look at the logs. So if I run kubectl get pods, I’m interested in the flask-todo pod. So let’s run kubectl logs like this, and now we’re seeing our logs.
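In other words, something along these lines — the deployment and namespace names are assumptions from this demo:

```bash
# List the pods in the app's namespace, then stream logs from the Flask deployment.
kubectl get pods -n flask-todo
kubectl logs -f deployment/flask-todo -n flask-todo
```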
And notice when I make a change — let’s say I add another to-do, and then maybe delete something, maybe delete something else — notice what I have here. I have this log: connection pool initialized, refreshed with new credentials, with a temporary username. If I make a change, let’s add hi, it will do that again.
You’ll see that it refreshes the connection pool once again. If I delete this, nothing happens. If I quickly add something, nothing happens. But if I wait fifteen seconds, which is the time to live that I’ve specified in Akeyless, the dynamic secrets refresh.
So what does that mean? Well, what I did is I wanted to make sure that we are providing a secure application to our developers to work with from the very beginning.
And what’s happening is that we’ve designed this Python Flask app to retrieve dynamic secrets for the MySQL database so the web app can talk to the database with dynamic, just-in-time credentials. The application checks whether it can access the database. If the credentials have expired, it goes out, refreshes the connection pool, requests a new set of username and password for the database, and continues on. So it’s been more than fifteen seconds — if I interact with the database again, it’s gonna get new credentials. That’s how the application works.
And, just notice that this is fifteen seconds. Obviously, you’re not gonna be doing that. This is for demo purposes. You don’t wanna keep refreshing the connection pool that often.
So you’re gonna take your time, an hour, maybe a day, depending on how serious your application is and how secure you wanna make it. Alright. So now we see that the application is running. It’s using dynamic secrets, just in time secrets.
So it is running very securely.
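If you’re curious what the app is actually receiving, you can pull a value for the same dynamic secret from the Akeyless CLI — a temporary username and password that stop working once the TTL runs out. The secret path here is illustrative:

```bash
# Fetch a just-in-time MySQL credential (path is illustrative; requires an authenticated CLI
# that can reach the Akeyless gateway). The returned JSON holds a temporary user and password
# that expire after the configured TTL — fifteen seconds in this demo.
akeyless get-dynamic-secret-value \
  --name /demos/mysql-root-dynamic-secret \
  --token "$AKEYLESS_TOKEN"
```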
Excellent. Now we wanna see what happens if we wanna make a change to our application. So we wanna, for example, change the to-do list here. We’ll make a change in the index template — the h1 tag of To Do List — add something else, and work with the application.
So how do we do that? Well, first of all, let’s go back to our readme. We have instructions for our developers to follow here. So here are the instructions that we have for our developers that we’ve set in place.
There are just a couple of things we have to do before starting our development: ensure the package access is configured. So we gotta navigate in GitHub to Packages, select the flask-todo package, click package settings, or directly visit this link so we can go here.
And we need to add our repository — the app 53 repo — to this package, because we’ve already pushed the Docker images into this package and we wanna continue to use that same package.
Alright. So that’s what we’re trying to do. So we’re gonna go in here.
We’ll go straight to it — we just gotta change the username here in the link.
It’ll take us to the package. You can see here: manage Actions access. We wanna add a repository, and this is the flask app 53. Add repository, and that’s it. We need to change the role to Write so the repository can push to this package.
Once we have that running, we can continue our development process. You can see all of this in the development workflow, and you can define this however your organization runs. First thing: git checkout a feature branch with your feature name.
So we create a feature branch, make changes and commit them, push the branch, and create a PR. After the PR review and approval, merge to main. And after verifying the changes in main, create and push a version tag. Once you push the version tag, this will build and push a new versioned container image, update the deployment manifest, and trigger Argo CD to deploy the new version of your application.
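In plain git terms, that day-to-day loop boils down to something like this — branch and remote names are just examples:

```bash
# Create a feature branch, commit your changes, and push the branch for review.
git checkout -b feature/my-feature
# ...edit code...
git add .
git commit -m "Describe the change"
git push -u origin feature/my-feature
# Open a pull request in the GitHub UI (or with the gh CLI), get it reviewed, and merge to main.
```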
So we can do that real quick. We can go here from Cursor, and from main, we can create a new branch and give it a name. Let’s say title-change, for example.
We can see that we are now on the title-change branch. Go into our index and change the To Do List title to Awesome To Do List.
This already gets saved automatically.
We can commit that. Let’s go ahead and commit this and say awesome title change.
Commit that. Let’s publish our branch.
And we can go directly here and create a PR from within Cursor or Visual Studio Code, or you can do it in the UI as well. Let’s do it from here and see: a pull request from Sam Gabrail title-change to Sam Gabrail main. Awesome title change. Let’s create that.
I should give it a good description, but for the sake of time, I’m skipping over these things. This is now an open pull request, and you see there’s one pending check. So let’s show this check. If we go to GitHub Actions here, you see that the PR validation’s validate job has run, and refreshing this once again should show us that it has completed successfully.
One successful check again. Go to the details here, and it will open it up in the browser. The PR validation workflow has completed successfully, and all it’s doing is running some Python tests.
Right? When you’re developing, you always have testing in place, but I’m just showing you that this was all packaged up by us as platform engineers. We packaged all this for our developers so that, from day one, they can go ahead and get started with everything they need already in place.
So let’s go back to the pull request.
We can see that everything has been validated successfully, and we can merge our pull request. Successfully merged.
We can go ahead and delete our branch.
All right. And now we’re dropped back into the main branch. So going back here, let’s make sure we sync our changes.
And this definitely is not gonna be the same person who created the feature branch. The person on main right now could be a different person altogether — maybe a senior engineer is now on the main branch. And the next step, as we saw in the readme, is to create a new tag for our application.
So what you could do is hit F1 and say Create Tag, and let’s give it version 0.0.2, for example.
And we can go ahead and push this tag.
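On the command line, that step is simply the following — the tag name follows this demo’s versioning:

```bash
# From an up-to-date main branch, cut a release tag and push it.
# Pushing the tag is what triggers the build-and-push workflow.
git checkout main
git pull
git tag 0.0.2
git push origin 0.0.2
```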
Okay. And once you push the tag, if you go into pipelines and look at our build and push Flask app, you see that there’s a pipeline already running now. This pipeline is building and pushing and doing a bunch of things. Let’s collapse this, make this a little bit bigger.
Let’s see what was happening here. I’m still an application developer here — I’m wearing that hat. Setting up the job, running the actions, extracting the version from the tag, setting up Buildx to build the Docker image, logging into the GitHub container registry.
We’re building and pushing the Docker image, updating the deployment manifest, and so on. So all this has successfully been pushed and deployed. And now, in Argo CD, we should be able to see a new deployment. You can see that it is out of sync, and it quickly synced because we have auto-sync running. So that just went really quickly.
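For reference, the build-and-push steps in that workflow (it uses the standard Docker login and build-push actions) boil down to roughly these commands — the image name and owner are illustrative:

```bash
# Log in to GitHub's container registry with the workflow's token, then build and push the tagged image.
echo "$GITHUB_TOKEN" | docker login ghcr.io -u "$GITHUB_ACTOR" --password-stdin
docker buildx build --push -t ghcr.io/<owner>/flask-todo:0.0.2 .
```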
Now, if we go back, let’s stop this connection, recreate it here, and see what’s happening with our deployments. So I have to run kubectl get all. We see twenty-four seconds already — the new pod has been deployed. You can of course see this in Argo CD as well. So let’s go back and look at our to-do list.
And there you go. Awesome to do list. Now that’s pretty awesome. Now we saw how as a developer, I was able to pick up the application where it was. As a platform engineer, you’re building this application. This is a template of how we build Python Flask applications in our organization.
Any new developer that needs to get started can just launch it as we saw from the portal. From there, everything is built and ready to go, and you can see the seamless experience the developer starts with. Without this, it can take weeks, if not months, to get an application ready for a developer to be onboarded. This is the power of platform engineering: getting you started really fast.
Alright. I’m switching hats. And now I’m a platform engineer showing you behind the scenes what happened and how we were able to create this whole workflow for our developers. So for this next section, I’m a platform engineer.
Let’s now jump behind the scenes and see what happened on the Akeyless side of things and how we were able to power this whole workflow with Akeyless.
So if you look at my items here, we have a demos folder. And inside the demos folder, I have a bunch of folders. I have an Argo CD folder where I have my server static secret, and I have user credentials — a password — to log in to Argo CD.
I also have a GitHub personal access token, which is also a static secret. I also have a Port folder where I have my credentials, also static secrets for accessing Port.
And then we have the MySQL root password, and that’s the one that we started with to get the deployment going so that we can log into that MySQL database and then generate dynamic secrets in it. And then you can see, finally, the MySQL root dynamic secret here, which is what the application is calling on to get dynamic secrets. If you click Get Dynamic Secret, you can see it generates credentials that expire really quickly — in fifteen seconds. Right?
And you can see all those temporary credentials. If they’re still available, they’ll appear here. You see this one is almost done. So that’s what we were doing in the application.
And, of course, it is configurable. Right? You can choose to make it a lot longer than fifteen seconds. It’s pretty short, the TTL here. Right?
Cool. Right now, I’m in the gateway.
And in the gateway, we’ve created a target. What the target allows us to do is to create the dynamic secret easily. So here under demos, I have my MySQL root password target. This target is really just the connection details.
It’s tied to the database in my Kubernetes cluster. It talks to this database and it allows us to create those dynamic secrets.
And from there, we refresh every fifteen seconds as we saw.
Now the next thing I wanna show you is the authentication piece, which is very important, and this is the GitHub authentication.
And this is what allows the GitHub pipeline to authenticate into Akeyless, seamlessly solving the secret zero problem. So I’m not putting any long-lived access key inside of the pipeline.
It’s using the JWT authentication method. From here, you can see the configuration for the JWT. The unique identifier is the repository.
I can see the associated roles with it. And here you can see the different roles. Demos is the role that we’re interested in. You can see the sub-claim is repository equals SamGabrail/akeyless-platform-engineering-port.
That’s the actual repository where the GitHub Actions workflow runs to build everything for us. Right? So that’s the role. And in this role, we can see that we have a few authentication methods tied to it.
GitHub auth is one of them. An API key that I was testing with was part of it as well. And also the authentication method of type Kubernetes, which allows the Python application to talk to Akeyless and retrieve the dynamic secrets.
Okay. So that’s very important as well.
Okay. And the rules here basically open up everything under the /demos path for items.
Items basically means secrets — the dynamic secrets that we talked about and the static secrets — and also targets. This is what allows the pipeline to create targets for us.
Now, the other thing I want to show you is the actual pipeline itself, because that’s where all the magic happens. If we jump into our pipeline here, you can see this is the akeyless-platform-engineering-port repository, where we have everything defined for us. And we can see under workflows, we have one workflow.
This is the workflow that is dispatched from Port. If you recall, in the very beginning of the video, we created the Python app and gave it a bunch of inputs: the app name, the namespace, database name, MySQL secret name, repo name, Port payload, and so on.
From here, we’re only defining the Akeyless access ID. So this is key. If we go into the GitHub Actions tab and look at our secrets, we only have the Akeyless access ID secret here. That’s the only thing we’ve put into the pipeline. All other secrets are stored in Akeyless itself. The pipeline goes out and reads those secrets as it goes along.
So here we’ve got our jobs and we have a bunch of jobs in here.
Just notice the permissions are important.
So contents: we’re allowed to read; packages: we’re allowed to write; and id-token: allowed to write, to interact with Akeyless. But also notice the runner here — I’m running this on my desktop. The reason for that is my desktop has access to my Akeyless gateway, and the Akeyless gateway is not exposed out to the Internet, which is the way you should have things running in your own organization. So that’s why I have a self-hosted runner running in my environment. It’s basically my desktop, and it allows me to connect directly to the Akeyless gateway. That’s how we’re securing everything right there.
Steps, we’re checking out the code. We’re configuring the Akeyless CLI to get some secrets, as you can see.
First of all, get the GitHub Actions OIDC token. This is what’s gonna allow us to authenticate into Akeyless using JWT — the JWT token from within GitHub itself. We can see the ACTIONS_ID_TOKEN_REQUEST_TOKEN and ACTIONS_ID_TOKEN_REQUEST_URL, and we store them in the GitHub environment.
These will be stored in the environment inside of our runner. To get the JWT token, we use this curl command, then authenticate with Akeyless using JWT. We can see that we’re echoing the two important environment variables: the gateway TLS certificate and the actual URL of the gateway. These are very important to be able to get access to the gateway.
Exporting the Akeyless token: as you can see here, we’re getting the token. We’re using the akeyless auth CLI command with the access ID of that secret, access type JWT, and the JWT token that we just got from GitHub itself. We’re asking for JSON output and querying for the .token value that we can use later. And it’s the Akeyless token variable that we’re exporting it to.
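Stripped down, those steps look something like this — variable names are assumptions, and the gateway URL and trusted TLS certificate are supplied through the environment variables mentioned above:

```bash
# Ask GitHub for the OIDC token it issues to this workflow run (an audience can be appended to the URL).
JWT=$(curl -s -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
  "$ACTIONS_ID_TOKEN_REQUEST_URL" | jq -r '.value')

# Exchange it for an Akeyless token via the JWT auth method — no long-lived secret involved.
AKEYLESS_TOKEN=$(akeyless auth \
  --access-id "$AKEYLESS_ACCESS_ID" \
  --access-type jwt \
  --jwt "$JWT" \
  --json | jq -r '.token')
export AKEYLESS_TOKEN
```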
Then we’re getting the Argo CD static secret, and we’re passing in the token as a command-line argument for every command that we’re gonna run to talk to Akeyless at this point from within the GitHub pipeline. Okay. So that’s important. So we can see we’ve passed it in here, and we’re able to get the Argo CD server.
We specify the name of the secret with the path, and we did the same for the Argo CD credentials, the Port credentials, and also the personal access token for GitHub, because we’re gonna create, remember, a new repository and do a bunch of things with GitHub Actions talking to GitHub’s API.
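Each of those lookups is a one-liner against Akeyless with the token passed explicitly — the paths below are illustrative stand-ins for the demo’s folder structure:

```bash
# Pull the static secrets the pipeline needs; nothing but the access ID lives in GitHub itself.
ARGOCD_SERVER=$(akeyless get-secret-value --name /demos/argocd/server --token "$AKEYLESS_TOKEN")
ARGOCD_PASSWORD=$(akeyless get-secret-value --name /demos/argocd/credentials --token "$AKEYLESS_TOKEN")
GITHUB_PAT=$(akeyless get-secret-value --name /demos/github/pat --token "$AKEYLESS_TOKEN")
```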
Setting up Docker build, logging into GitHub’s container registry, we’re storing the Docker images inside of GitHub’s package or container registry. So you can see the registry name and then the username and password. We’re getting that directly here.
And then the Build and push Flask app image step. We’re going to use this particular action to build and push. And then creating the new repository. We are using the GitHub token with the personal access token that we got earlier from Akeyless.
And we’re running this script here, which, as you can see, creates a private repository in GitHub, and then retrieves the MySQL root password and creates a Kubernetes secret. Alright? So we’re doing that in the beginning. We’re getting that secret from Akeyless — as you see here, get the secret. It’s a static secret holding the root password that we’re gonna use for our database.
Creating a namespace if it doesn’t exist.
Create the secret in the namespace.
We’re dropping this as a secret in Kubernetes.
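Condensed, that sequence is roughly the following — the repo, namespace, and secret names follow this demo and are assumptions:

```bash
# Create the new private repository through GitHub's REST API, using the PAT fetched from Akeyless.
curl -s -X POST https://api.github.com/user/repos \
  -H "Authorization: Bearer $GITHUB_PAT" \
  -H "Accept: application/vnd.github+json" \
  -d '{"name":"flask-todo-app-53","private":true}'

# Read the MySQL root password (a static secret) and drop it into the app's namespace.
MYSQL_ROOT_PASSWORD=$(akeyless get-secret-value --name /demos/mysql-root-password --token "$AKEYLESS_TOKEN")
kubectl create namespace flask-todo --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic mysql-root-password \
  --from-literal=password="$MYSQL_ROOT_PASSWORD" \
  -n flask-todo --dry-run=client -o yaml | kubectl apply -f -
```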
And notice the environment here. We’re using the Akeyless gateway URL and the Akeyless trusted TLS certificate file as well. And then: prepare and push manifests.
So we’re getting everything set up to push our manifest that we have here, as you see on the left hand side, into the newly created repo. So you can see we’re cloning and then we’re initializing the new directory and then we’re gonna do a bunch of copies down here. So copying files from parent directory, copying the Kubernetes manifests.
We’re copying the app folder, the tests folder, the Dockerfile, requirements.txt, Pipfile, Pipfile.lock, and run.py.
And we’re actually updating the README — removing the old README from this repo and creating the new one, which is the one we saw in the beginning.
Then we’re gonna create the GitHub Actions workflow directory.
So we’re making a directory here for our workflows, and we are pushing in the workflows that are gonna be inside that new repo for our apps, for our developers.
So you can see that under the flask-todo app.
Here we go: .github/workflows, and it’s the build-and-push workflow and the PR validation with the Python tests that run when you open a pull request. Alright. Those are being copied over as well. Then: updating the image reference in the deployment.
So we’re gonna run a bunch of sed commands to replace some of the placeholders that we have in place to get things going. For example, we see this flask-todo:latest in flask-deployment.yaml, and that is inside of our Kubernetes manifests — the flask-deployment.yaml. We see it here. This is the deployment, and we have a few placeholders in place here that we are going to replace.
And doing the same down here also, putting in the app name and the namespace. We got all this information from the inputs — from the developer, from the Port form that we saw in the very beginning. Right? So now we’re injecting that into our manifests. Then we’re committing and pushing to our repository. Everything looks good. Then we’re gonna create a Kubernetes secret for the gateway certificate.
As you see here, this is the gateway certificate to talk to the Akeyless gateway.
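In shell terms, the placeholder replacement and the certificate secret look something like this — the placeholder tokens, file names, and cert path are illustrative:

```bash
# Substitute the generated values into the copied manifests before committing them to the new repo.
sed -i "s|APP_NAME_PLACEHOLDER|flask-todo-app-53|g" flask-deployment.yaml
sed -i "s|NAMESPACE_PLACEHOLDER|flask-todo|g" flask-deployment.yaml mysql-deployment.yaml

# Store the gateway's TLS certificate as a Kubernetes secret so the app can talk to Akeyless over TLS.
kubectl create secret generic akeyless-gateway-cert \
  --from-file=ca.crt="$AKEYLESS_GATEWAY_CERT_FILE" \
  -n flask-todo --dry-run=client -o yaml | kubectl apply -f -
```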
And then we’re logging into Argo CD. We have the Argo CD credentials that we grabbed in the beginning from Akeyless.
So we’re doing that and then we’re registering the repository in Argo CD.
Okay. So registering this new repository that we created for our developer, and then we’re creating an Argo CD application.
We’re referencing the app name here, and then we’re referencing the repo that we just created. And you can see the path is the root path. This is where the manifests live — the Flask deployment, the MySQL deployment, and the services. We copied those into the repo for the developer’s application at its root path.
The destination server: Argo CD actually lives in the same cluster as our application, as you can see here — the destination server is the same cluster. This is actually a recommendation: keep your Argo CD in the same cluster where your application is running so that you don’t traverse different clusters. From a security point of view, it’s more secure.
The destination namespace is set, the project is default, the sync policy is automated, and the sync option CreateNamespace is true.
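Put together, the Argo CD portion of the pipeline is essentially these CLI calls — the server address, owner, and repo name are placeholders:

```bash
# Log in with the credentials fetched from Akeyless, register the new repo,
# and create the application pointing at the manifests in the repo's root path.
argocd login "$ARGOCD_SERVER" --username admin --password "$ARGOCD_PASSWORD" --grpc-web
argocd repo add "https://github.com/<owner>/flask-todo-app-53.git" \
  --username <owner> --password "$GITHUB_PAT"
argocd app create flask-todo-app-53 \
  --repo "https://github.com/<owner>/flask-todo-app-53.git" \
  --path . \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace flask-todo \
  --project default \
  --sync-policy automated \
  --sync-option CreateNamespace=true \
  --upsert --grpc-web
```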
The --upsert and --grpc-web flags round out that command. And then: create the Akeyless target. We’re creating a target because we wanna create a dynamic secret — before creating a dynamic secret, you create a target. So this is the akeyless target create database command.
This is what we’re going to name it, along with the token that we’re using, and it’s for a MySQL database — db type mysql. And you can see the password, which is the MySQL root password, the host, the username, the port, the database name, and we’re providing the environment variables that we’re used to by now. And then finally, creating the Akeyless dynamic secret. Now that we’ve created the target, we can use that target to create the dynamic secret, and here’s the command to do so.
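As a sketch only — the exact subcommands and flag names vary between Akeyless CLI versions, so treat this as an approximation of what the pipeline runs rather than copy-paste commands:

```bash
# Create a database target holding the MySQL connection details, then create a dynamic secret
# backed by that target with a short TTL. Flag names are approximate; check `akeyless --help`.
akeyless target create db --name /demos/mysql-root-target \
  --db-type mysql \
  --host mysql.flask-todo.svc.cluster.local --port 3306 \
  --user-name root --pwd "$MYSQL_ROOT_PASSWORD" \
  --db-name todos \
  --token "$AKEYLESS_TOKEN"

akeyless dynamic-secret create mysql --name /demos/mysql-root-dynamic-secret \
  --target-name /demos/mysql-root-target \
  --user-ttl 15s \
  --gateway-url "$AKEYLESS_GATEWAY_URL" \
  --token "$AKEYLESS_TOKEN"
```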
That’s pretty straightforward. Then, after all is done, we trigger the Argo CD sync: we run argocd app sync on our app to make sure everything is good to go. Finally, we’re notifying Port — sending the log messages that you saw when we ran the action.
That’s just the feedback we get back as an application developer: where is my app running, what’s the new app repo that was created (in case you forgot), the service repo where we can find this inside of Port, and also the Argo CD dashboard inside of Port.
And that’s it. It’s a pretty big workflow, and it took me quite a bit of time to get this to finally work perfectly. But again, once you’ve done this, you can create other application templates — like a Node.js app or a Go app — and use the same concepts we talked about. Here they are in summary. Number one, whenever possible, use dynamic secrets that are short-lived. Number two, solve the secret zero problem by using a platform identity to authenticate with, such as JWT tokens for GitHub and Kubernetes auth into Akeyless.
Number three, use native integration with Akeyless in your app template that you deliver to your developers. Number four, implement a retry mechanism in your apps to request new dynamic database secrets if it fails to connect to the database due to expired credentials.
A final note, developers are building application features and business logic. They shouldn’t worry about infrastructure such as Kubernetes, Argo CD, and Akeyless.
Secure infrastructure best practices are our job as platform engineers, and I hope you saw that in this video. Thanks for watching.