How to set up a CI/CD flow with GitHub Actions from scratch

  • Defining workflows in GitHub Actions allows you to automate linting, testing, building, and deployment based on events such as push, pull request, or tags.
  • A good CI/CD flow separates build, release, and deploy phases, using matrices, Taskfiles, and reusable actions to scale across many repositories.
  • Proper management of secrets, GitHub permissions, branch rules, and monitoring ensures reliable and secure deployments across different environments.

CI/CD workflow with GitHub Actions

If you work in software development, you've probably heard the saying "it works on my machine" right before everything crashes in production. That's where a good CI/CD workflow with GitHub Actions makes all the difference: you automate tests, builds, and deployments, and you stop crossing your fingers every time you release.

Throughout this article you will see, step by step and in detail, how to build a CI/CD pipeline from scratch using GitHub Actions, combining real-world examples: validation with ESLint, test execution, deployment with external APIs such as Kinsta, publishing to GitHub Pages, deployments to servers with SSH, integration with cloud environments, and the design of reusable pipelines for large organizations.

Why is it worth setting up a serious CI/CD flow?

Beyond theory, a well-designed pipeline solves a very real problem: in collaborative projects, uncontrolled automatic deployments can break production with a single poorly tested commit. Imagine that every push to the main branch deploys directly; if no one has run tests or linters, the risk of crashing the site is extremely high.

With a well-orchestrated CI/CD flow, all changes first go through automated checks: linters, unit tests, integration tests, static analysis, etc. Only when these checks pass is deployment to different environments allowed. This is key both in small projects and in organizations with hundreds of repositories and a lot of legacy code.

Furthermore, when your CI/CD infrastructure is scattered (legacy Jenkins on one hand, Concourse on the other, manual scripts, steps documented in a wiki… or in someone's head), maintenance becomes a nightmare. Centralizing everything in GitHub Actions, a managed service integrated with the repository, greatly reduces operating costs and gives you a standard foundation to build upon.

What exactly is GitHub Actions and how does it fit into CI/CD?

GitHub Actions is the automation platform included in GitHub that allows you to define workflows in YAML files within the repository itself. These flows are triggered in response to events (push, pull request, tag creation, releases, cron scheduling, etc.) and execute jobs composed of several steps.


Each job runs in a runner (for example ubuntu-latest) and is composed of steps, which can be shell commands or reusable Marketplace actions. With this model you can do almost anything: from compiling, running tests, and linting, to deploying to Kubernetes, GitHub Pages, servers over SSH, serverless functions, or services like Kinsta through its API.

Key components of a workflow in GitHub Actions

When you define a CI/CD pipeline, you'll always be working with the same basic building blocks, so it's important to understand them clearly and apply them consistently across all your repositories, avoiding Frankenstein configurations that nobody understands later.

  • Workflows: YAML files in .github/workflows that describe automation: triggers, jobs, and steps.
  • Jobs: work units that run in a runner; they can run in parallel and be chained together using needs.
  • Steps: steps within a job; these can be actions (uses:) or commands (run:).
  • Runners: machines that run the jobs. They can be hosted via GitHub or self-hosted on your own infrastructure.
  • Actions: reusable pieces that encapsulate logic, both third-party (from the Marketplace) and your own.

Furthermore, in advanced pipelines a strategy matrix (strategy.matrix) often comes into play, used to test the same project in several environments (different versions of Node, Java, operating systems, etc.), accelerating feedback without having to duplicate configuration code.
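As an illustration (job and script names are hypothetical), a matrix that runs the same test suite across two Node versions and two operating systems might look like this:

```yaml
jobs:
  tests:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        # Every combination of these values gets its own parallel job
        node-version: [18.x, 20.x]
        os: [ubuntu-latest, windows-latest]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm test
```

This spawns four parallel jobs from a single job definition, so a regression on a specific Node version or OS surfaces immediately.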

First steps: preparing the repository and the workflow file


The first step in setting up your pipeline is to create the .github/workflows folder in the root of the repo and, inside it, a YAML file, for example build-test-deploy.yml. That file describes the complete behavior of your CI/CD flow, from what triggers it to how it is deployed.

A very common pattern is for the workflow to be triggered on push and pull_request events targeting the main branch, so that any change to be integrated into the codebase goes through the same quality filter.
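A minimal trigger section for such a build-test-deploy.yml could look like this (the workflow name is just an example):

```yaml
name: build-test-deploy

on:
  push:
    branches: [main]       # run on every push to main
  pull_request:
    branches: [main]       # and on every PR targeting main
```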

In addition to "interactive" events like push or PR, GitHub Actions lets you define scheduled tasks with the schedule key and cron syntax. For example, you could run a daily maintenance task or backups at midnight with 0 0 * * *, or prepare jobs that run every Monday at 8:00 UTC with 0 8 * * 1.
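Those two schedules, expressed in workflow syntax (note that cron expressions should be quoted so YAML doesn't misparse the asterisks), would be:

```yaml
on:
  schedule:
    - cron: '0 0 * * *'   # every day at midnight UTC
    - cron: '0 8 * * 1'   # every Monday at 08:00 UTC
```

Scheduled workflows always run against the default branch, which is worth keeping in mind when testing them.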

Understanding cron syntax in GitHub Actions

The scheduling part uses the typical UNIX cron syntax: five space-separated fields indicating minute, hour, day of the month, month, and day of the week. It's quite flexible, so it's worth mastering to automate recurring tasks within the pipeline.

  • Minute (0-59): at what minute the job is launched, for example 15 for minute 15.
  • Hour (0-23): hour of the day in 24-hour format, for example 8 for 8:00.
  • Day of the month (1-31): specific day of the month.
  • Month (1-12): month number, for example 6 for June.
  • Day of the week (0-7): Sunday is both 0 and 7, Monday is 1, and so on.

You can also use special characters to express finer-grained rules and have jobs that suit your needs without multiplying workflow files.

  • *: any value in that field, for example * in minutes means "every minute".
  • */n: intervals, such as */5 for "every 5 minutes".
  • ,: list of values, for example 1,15,30 in minutes.
  • -: ranges, such as 1-5 in the day-of-week field (Monday to Friday).
  • ?: unspecified value, useful when you define the day of the month but don't care about the day of the week.

Design the continuous integration phase: lint, tests and quick checks

A good CI practice is that the initial checks should be quick, so you get immediate feedback: linters, unit tests, and static analysis should fail as soon as possible if something goes wrong. Several real-world implementations follow this scheme, whether validating JavaScript/TypeScript code with ESLint and Jest, or Java projects with Gradle, JUnit, and analyzers like PMD, Checkstyle, or SpotBugs.

Lint job with ESLint as the first filter

A typical lint job in a Node/React or similar project is usually called something like eslint and runs on Ubuntu. To cover different runtime environments, you can define a matrix of Node versions (for example 18.x and 20.x) and thus ensure that your code behaves the same in all supported versions.

The usual steps for this job include checking out the code with actions/checkout, configuring Node with actions/setup-node (enabling the npm cache), installing dependencies with npm ci (ideal for clean and reproducible environments), and running ESLint with the script defined in package.json, normally npm run lint.

The advantage of giving each step a clear name is that, when something goes wrong, you can identify at a glance which part of the pipeline is failing by reviewing the logs in the Actions tab.
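Putting those steps together, a sketch of such a lint job might look like this (the lint script is whatever your package.json defines):

```yaml
jobs:
  eslint:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18.x, 20.x]
    steps:
      - name: Check out the code
        uses: actions/checkout@v4
      - name: Set up Node ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: npm           # cache the npm download cache between runs
      - name: Install dependencies
        run: npm ci            # clean, lockfile-exact install
      - name: Run ESLint
        run: npm run lint
```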

Unit and integration testing job

After the lint, there's usually a job called something like tests that depends explicitly on eslint using needs: eslint, so that tests are not run if the syntax is already broken. This job performs a checkout again, configures the appropriate Node version (e.g., 18.x), installs dependencies with npm ci, and runs npm run test or npm run test:unit depending on your stack.
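A minimal sketch of that test job, chained after the lint (job and script names are illustrative):

```yaml
  tests:
    needs: eslint            # only runs if the eslint job passed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18.x
          cache: npm
      - run: npm ci
      - name: Run unit tests
        run: npm run test:unit
```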

In the case of a Vue.js project that deploys to GitHub Pages, for example, the pipeline first installs dependencies, then launches unit tests with npm run test:unit, then builds the app with npm run build if everything goes well, and finally uses an action like peaceiris/actions-gh-pages to publish the dist folder to the gh-pages branch.

Continuous deployment: from checks to production (without dying in the attempt)

Once continuous integration is in place, it's time to think about how to automate deployment. There are many variations here: from hosting platform APIs to GitHub Pages, traditional servers with SSH, or advanced cloud environments like GKE, GAE, or Cloud Functions.

Deploying via API: example with Kinsta

Platforms like Kinsta offer a REST API with which you can manage deployments from your pipeline. The idea is to securely store the API key and application identifier, and launch a POST request with curl to the appropriate endpoint to trigger a new version.

To do this, you first need to generate an API key from the MyKinsta panel, save it in a safe place, and then configure it as a secret on GitHub (for example KINSTA_API_KEY). The same goes for the APP_ID of your application, which you can obtain by listing your apps with the API itself.

In the workflow, the deployment job is usually called something like deploy, depending on the lint and test jobs with needs: [eslint, tests] and executing a single curl step that uses environment variables mapped to secrets: ${{ secrets.KINSTA_API_KEY }} and ${{ secrets.APP_ID }}. In this way, the pipeline triggers a deployment on Kinsta without exposing credentials in the code.
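A sketch of such a job follows; the endpoint and JSON payload reflect Kinsta's public API, but double-check them against the current API documentation before relying on this:

```yaml
  deploy:
    needs: [eslint, tests]
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Kinsta deployment
        env:
          # Secrets are mapped to env vars so they never appear in the YAML
          KINSTA_API_KEY: ${{ secrets.KINSTA_API_KEY }}
          APP_ID: ${{ secrets.APP_ID }}
        run: |
          # Note: a 2xx here only means the request was accepted,
          # not that the deployment itself succeeded
          curl -i -X POST \
            -H "Authorization: Bearer $KINSTA_API_KEY" \
            -H "Content-Type: application/json" \
            -d "{\"app_id\": \"$APP_ID\", \"branch\": \"main\"}" \
            https://api.kinsta.com/v2/applications/deployments
```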

An important detail is that the deployment job can be marked as successful simply because the API accepted the request, even if the subsequent deployment fails. If you need to check the final state, you would have to add extra logic to query the result or hook into events from the platform itself.

Deployments to GitHub Pages with quality control

In the Vue.js example we discussed earlier, the goal is for the website to deploy to GitHub Pages only when the tests have been successful. The flow is very direct: push to main, run the tests, build and, if all goes well, use the GitHub Pages action to publish the ./dist folder.

For the action to be able to push to the gh-pages branch, you need to go to the repository settings, section Settings > Actions > General > Workflow permissions, and grant Read and write permissions to the GITHUB_TOKEN. If you don't, you'll run into a permissions error when attempting to deploy.
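With permissions in place, the deploy job can be sketched like this (assuming the test job is named tests):

```yaml
  deploy:
    needs: tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18.x
      - run: npm ci
      - run: npm run build
      - name: Publish to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./dist   # the folder produced by the build
```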

Deploy to your own servers using SSH and SCP

In environments where you continue to work with traditional servers (for example, shared hosting or a VPS), a very practical approach is to use SSH keys and actions like appleboy/scp-action to copy the build to the server from GitHub Actions.

The mechanics are simple: you generate an SSH key pair on your local machine, you add the public key to the server's authorized_keys, and you save the private key as a secret in the repository (for example STAGING_SSH_KEY or PROD_SSH_KEY). You also save the host and the user (STAGING_HOST, STAGING_USER, etc.).

In the workflow you can define a single deploy workflow that triggers on tag pushes, and use the expression endsWith(github.ref, '-staging') or endsWith(github.ref, '-prod') to determine whether to deploy to staging or production. Each job installs dependencies, executes the build (for example a SvelteKit build with npm run build), copies environment-specific files such as .htaccess and robots.txt, and finally uploads the build folder to the server via SCP.
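A sketch of the staging half of such a workflow (the action version and the target path on the server are assumptions; a production job would mirror this with the '-prod' suffix and the production secrets):

```yaml
on:
  push:
    tags: ['v*']             # only tag pushes trigger a deploy

jobs:
  deploy-staging:
    if: ${{ endsWith(github.ref, '-staging') }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18.x
      - run: npm ci
      - run: npm run build
      - name: Upload build via SCP
        uses: appleboy/scp-action@v0.1.7
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.STAGING_SSH_KEY }}
          source: "build/"
          target: "/var/www/staging"   # hypothetical path on the server
```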

This approach makes the deployment consistent and repeatable: in your day-to-day work you only do git push, and when you want to publish a new version, you create the appropriate tag (for example v1.2.3-staging or v1.2.3-prod), letting GitHub Actions take care of everything else.


Design reusable and scalable pipelines for large organizations

In companies with many repositories, having a pipeline copied and pasted into each project is a recipe for disaster: any global change involves editing dozens or hundreds of files. That's where reusable workflows and auxiliary tools such as Task, git-cliff, or custom actions come in.

The idea is to extract the common logic into a dedicated CI/CD repository that exposes reusable workflows, so that each project only has a minimal YAML file that invokes them with certain parameters (language, artifact type, deployment environment, etc.). In this way, backend and frontend services share the same base pipeline and only the inputs change.
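For illustration, the caller side in a service repository could be as small as this (the organization, repository, and input names are hypothetical; the reusable workflow on the other side declares those inputs under on: workflow_call):

```yaml
# .github/workflows/ci.yml in a service repository
name: ci

on:
  push:
    branches: [main]

jobs:
  pipeline:
    # A job-level "uses" invokes a reusable workflow from the CI/CD repo
    uses: my-org/ci-cd/.github/workflows/build.yml@v1
    with:
      language: node
      deploy-env: staging
    secrets: inherit          # pass the caller's secrets through
```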

Separate phases: build, release, and deploy

A fairly solid design draws inspiration from tools such as HashiCorp Waypoint, which clearly distinguishes between three phases: build, release, and deploy. This allows you, for example, to build and test quickly, generate artifacts and versions in a controlled manner, and then deploy with different strategies (blue-green, canary, etc.).

During the build phase, unit tests and static analysis run in parallel (highly recommended, even though it has a slightly higher CPU cost) to shorten the overall time. In Java projects, JUnit, PMD, Checkstyle, SpotBugs, and SonarCloud analysis can run in parallel, while in Node projects other commands are used, but the structure remains the same.

The release phase is responsible for building the artifact (Docker image, Java library, npm package, etc.), tagging the repository with a new version, generating the changelog, and publishing the artifact to the corresponding registry (Docker Registry, Maven, npm, Artifact Registry, etc.). For versioning, you can use a custom bash script that bumps the version and creates the tag, supported by tools like git-cliff to generate change lists, or actions such as release-action to create GitHub releases.

Finally, the deployment phase is specialized according to the runtime: GKE for Kubernetes, Google App Engine, Cloud Functions, etc. Each type of deployment resides in a specific workflow, with its own inputs and shared steps such as Google Cloud configuration or sending notifications to Slack for auditing and visibility.

Taskfiles, arrays, and support actions

To decouple the pipeline from the specific language, many teams use Task (a modern alternative to Make), which defines project tasks in a Taskfile.yml. The build workflow only needs to invoke the corresponding tasks, and thus the same pipeline works for Java, Node, TypeScript, etc. projects without branching out into a thousand conditionals.
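A minimal Taskfile.yml for a Node project might look like this (task names and commands are illustrative); the workflow then just runs task lint, task test, or task build, regardless of the language behind them:

```yaml
# Taskfile.yml (Task v3 syntax)
version: '3'

tasks:
  lint:
    cmds:
      - npm run lint
  test:
    cmds:
      - npm run test
  build:
    cmds:
      - npm run build
```

A Java project would define the same task names over Gradle commands, keeping the workflow identical across stacks.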

In addition, internal support actions are created for repetitive tasks: Java and Gradle configuration, Task setup, Google Cloud authentication, deployment to GKE, sending notifications to Slack, or Git release management. These actions reside in .github/actions and are then used from the various reusable workflows, keeping the CI/CD code modular and easy to evolve.

A rule of thumb in this approach is that service repositories should only consume reusable workflows, not actions directly. Thus, any internal changes to the actions are controlled from the CI/CD repository and backward compatibility is maintained.

Best practices for governing the flow of changes

In addition to the purely technical aspects, a robust CI/CD pipeline must be accompanied by branch and protection policies on GitHub to ensure that no one bypasses the checks through the back door.

In the Settings > Branches section of the repository you can define protection rules for the main branch: require pull requests before merging, require that status checks pass before merging (for example, the CI jobs we have defined), and configure additional options such as the minimum number of reviewers, blocking direct pushes, or the allowed merge policy.

Under these rules, all changes have to go through the same CI/CD workflow and through a human review before entering main, which greatly reduces the risk of introducing major errors and improves the traceability of the change history.

Management of secrets, permissions and security

In any reasonably serious pipeline you're going to handle sensitive information: SSH keys, API tokens, database credentials, etc. GitHub Actions has a system of encrypted secrets at the repository or organization level that are referenced from workflows using the syntax ${{ secrets.NAME }}.

The key is to never embed credentials directly in the YAML, but to store them in Settings > Secrets and variables > Actions and control who can modify those secrets. Additionally, it's advisable to periodically review token permissions (including those of the built-in GITHUB_TOKEN) and apply the principle of least privilege: only the access strictly necessary for each workflow.

Other useful measures include enabling GitHub tools such as code scanning and secret scanning, integrating dependency security analyzers (e.g., OWASP tooling for Java projects), and leveraging integrations with external monitoring and logging platforms (Datadog, New Relic, Splunk, etc.) to gain visibility into what is happening in your deployments.

Monitoring, logging, and troubleshooting

Each execution of a workflow generates a detailed log which you can view from the Actions tab of the repository. These logs show the output of each step, timings, errors, and the commands executed, allowing you to debug why a build, test, or deployment has broken.

In addition, you can supplement this information by integrating real-time notifications via Slack or Microsoft Teams: when a build or deployment fails, you can include direct links to the corresponding log. This way, teams are notified immediately and can react without having to constantly check GitHub.
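As a sketch, a failure-notification step using Slack's official action could look like this (the pinned action version is an assumption; check the current release before adopting it):

```yaml
      - name: Notify Slack on failure
        if: ${{ failure() }}   # only runs when a previous step failed
        uses: slackapi/slack-github-action@v1.26.0
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
          SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
        with:
          payload: |
            {
              "text": "Build failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
            }
```

The github.run_id link takes the reader straight to the failing log, which is the part teams actually need.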

When you notice your pipeline starting to slow down, it's worth reviewing internal metrics such as average build time, success rates, duration of specific jobs, and the size of the artifacts. Often, a simple optimization (introducing dependency caching, parallelizing tests, splitting overly large workflows into more specific ones) has a huge impact on the speed of feedback.


With everything we've seen, you can build anything from a basic pipeline that only runs tests and deploys to GitHub Pages, to a complete CI/CD platform on GitHub Actions that orchestrates builds, releases, and deployments across multiple environments, integrates linters, security testing, notifications, and monitoring, and scales reasonably well for both personal projects and organizations with a long legacy and demanding compliance requirements.