We have recently made numerous improvements to Feluda, following a security-first approach that supports long-term maintenance and active contributions. These approaches are not specific to Feluda and can be applied to any software project. These articles are written for a technical audience, and we hope they help other projects implement these practices for a safer digital experience.
When building software projects, every developer should take ownership of ensuring that the code they write is secure before it is accepted into the codebase. Though the idea sounds daunting, the learning curve has flattened substantially with current security tooling, which also serves as a teaching aid. This approach ensures that the responsibility for security does not fall solely on a separate cybersecurity team that fixes bugs after the code is already in the repository. Bug fixes therefore occur earlier in the development pipeline, or more towards the left when the development cycle is visualized as flowing from left to right. Hence, this approach to secure code development is also known as a shift-left approach.
The process is part of what "DevSecOps" aims to accomplish. As the name suggests, it brings together development, security and operations, teams whose objectives are generally misaligned: developers want faster development, operations folks desire stability, while the cybersecurity people want secure code. DevSecOps is guided by the CALMS framework - Culture, Automation, Lean, Measurement, and Sharing. The approach advocates for a culture shift towards collaboration, automating components for reproducible and stable systems, implementing the process in a lean manner for agility, measuring improvements from the processes implemented, and sharing knowledge to break down siloed teams for increased trust, agility and reliability.
We will now discuss how we automated checks within Feluda's development cycle to improve security, reduce technical debt, and increase code robustness and stability.
Our goal was to ensure that insecure code does not enter the Feluda code repository. For that, we set up checks that run every time someone contributes code to Feluda, i.e. every time a Pull Request (PR) is opened against any branch in the Feluda repository. This automated testing is set up using Continuous Integration (CI) pipelines. If the checks fail, the option to merge the changes into the codebase is automatically blocked. In GitHub, these are implemented as GitHub Actions workflows. You can view these here in the Feluda repository.
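As a minimal sketch of such a workflow (the file name, Python version and dependency file below are illustrative, not Feluda's actual configuration), a PR-triggered job that runs the test suite could look like this:

```yaml
# .github/workflows/ci.yml (illustrative file name)
name: CI
on:
  pull_request:   # run the checks for every PR, against any branch
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt   # assumed dependency file
      - run: pytest                            # a failing test fails the job
```

Marking this job as a required status check in the repository's branch protection rules is what actually blocks the merge when it fails.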
It is good practice to also run these checks locally as pre-commit checks. Pre-commit checks run every time a developer tries to commit code locally. If a check fails, the local commit fails, forcing the developer to fix the bugs instead of waiting for the PR checks to fail. However, there are drawbacks to pre-commit checks. For one, they may silently fail to run when a developer commits changes through a GUI tool. More importantly, pre-commit checks rely on the assumption that the developer is not malicious and will not bypass them locally. Hence, it is essential to have all checks running as part of the CI pipeline.
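With the pre-commit framework, for example, hooks are declared in a configuration file; the hooks and pinned versions below are an illustrative sketch rather than Feluda's actual setup:

```yaml
# .pre-commit-config.yaml (illustrative)
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0                 # pin hooks to a released tag
    hooks:
      - id: trailing-whitespace
      - id: check-merge-conflict
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9                 # illustrative version
    hooks:
      - id: ruff                # lint staged Python files before each commit
```

Each developer runs pre-commit install once, after which the hooks run automatically on every local commit.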
List of checks for a security-focused software project
A basic check to ensure submitted code is not buggy is to verify that existing features still work after new code is merged. For this, it is essential to have unit tests and integration tests with sufficient code coverage. Our team continued adding tests for every new and existing feature so that we can verify that we do not break features during development. Further, we set up a GitHub workflow that runs the tests for every PR created.
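If you use pytest with the pytest-cov plugin, the test step in the workflow sketched above can also enforce a minimum coverage threshold; the package name and threshold here are illustrative:

```yaml
      - run: pip install pytest pytest-cov
      - run: pytest --cov=feluda --cov-fail-under=80   # fail the job if coverage drops below 80%
```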
The next step is to add linting as part of the CI pipeline. A language-specific linter checks for basic syntax and formatting issues. While linting does not directly indicate security issues, it does indicate poorly written code, which may be buggy now or become buggy later. Hence, it is better to fix these issues early than to accumulate technical debt that leads to future security and stability issues.
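For a Python project, this can be a small additional job in the same workflow; the sketch below uses ruff as an illustrative linter and formatter, which is not necessarily what Feluda uses:

```yaml
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install ruff
      - run: ruff check .            # report lint errors
      - run: ruff format --check .   # fail if files are not consistently formatted
```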
Most modern software projects use package repositories from which they source external dependencies. These include Maven Central for Java, PyPI for Python and the npm registry for Node.js projects. Most package repositories maintain a database of known vulnerabilities in each version of the projects they host. These are not necessarily intentionally malicious packages, but packages with vulnerabilities that were discovered after release. The language-specific tooling usually includes auditing tools to check the packages you are using against the vulnerability database; for example, the pip-audit command for dependencies sourced from PyPI, or npm audit for dependencies sourced from the npm registry. Vulnerability databases are usually downloaded and checks are run locally to prevent details of your dependencies from being made visible to third-party servers. If security issues are found, the fix is usually as simple as upgrading the package to a newer version. The audit tools provide options for an automatic fix, or more information if there is none. In case a fix is not available, you face the hard choice of moving to a more secure dependency or waiting for an updated version with the fix. In either case, you now know that your codebase has known vulnerabilities. Dependency auditing is a basic check during code development. However, it becomes essential in a CI system, since newly submitted code may add a package with known vulnerabilities or downgrade an existing package to a vulnerable version. In Feluda, we run dependency auditing for every PR.
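A minimal sketch of a dependency audit job, assuming dependencies are listed in a requirements.txt file (Feluda's actual layout may differ):

```yaml
  dependency-audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install pip-audit
      - run: pip-audit -r requirements.txt   # exits non-zero if known vulnerabilities are found
```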
While dependency auditing is good practice, it is best practice to always use the latest version of all software tools and dependencies. This ensures that bug fixes are applied as soon as possible, protecting against undiscovered vulnerabilities and known stability issues. This is accomplished using Software Composition Analysis (SCA) tools that scan your codebase for all dependencies and can automatically create PRs to update your packages. These tools also alert you as soon as a new vulnerability is found in one of your dependencies. Dependabot is a tool by GitHub that is easy to set up for this purpose. Renovate is a good open-source alternative. When upgrading packages directly through PRs created by SCA tools, it becomes essential to have unit and integration tests running in CI to ensure automated package updates don't break your software's functionality. This is even more important for major version updates, which imply breaking changes.
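Dependabot, for instance, is configured through a small file in the repository; the ecosystems and schedule below are an illustrative sketch:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"             # Python dependencies
    directory: "/"                       # assumed location of the manifest files
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"  # keep the CI actions themselves updated
    directory: "/"
    schedule:
      interval: "weekly"
```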
An essential security tool is one that scans your codebase for secrets. These could include cryptographic keys or passwords to your development account or production server. It is important to set up this tool early so that secrets are not intentionally or unintentionally committed to the code repository and leaked publicly. Secret scanning tools exist because this oversight is common. GitHub provides a secret scanning tool. A good open-source option is TruffleHog. In case you accidentally commit secrets to your repository, it is best to rotate your keys. A good practice to prevent accidentally committing secrets is to store them in environment files during development and add these files to your .gitignore file so they are never uploaded.
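TruffleHog also ships a GitHub Action that can run on every PR; the inputs below follow its documentation at the time of writing and should be checked against the current docs:

```yaml
  secret-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0                 # fetch full history so new commits can be diffed
      - uses: trufflesecurity/trufflehog@main
        with:
          base: ${{ github.event.repository.default_branch }}
          head: HEAD
          extra_args: --only-verified    # report only secrets that verify against their provider
```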
The next step is to add a static application security testing (SAST) tool that checks your code for vulnerabilities. This is an essential part of automating "blue teaming", or hardening your defenses. There are multiple open source SAST tools available. Some work across multiple languages while others are geared towards specific languages. These tools have different false positive and false negative rates. They will usually assign a risk score to a found vulnerability and provide information on where the vulnerability exists in the code, why it is bad coding practice, how to evaluate whether it is really a risk, and how to mitigate it. These tools may provide a CVSS (Common Vulnerability Scoring System) score or a CWE (Common Weakness Enumeration) identifier as additional information to help evaluate and fix issues. This is the step that can get daunting when you first start fixing security issues: you need to be able to evaluate the risk, research the optimal approach to apply a fix, and ensure that the fix does not add new vulnerabilities to your code. However, it is important to understand that many of the security issues you encounter will probably be low-complexity issues that are easy to fix for most developers with sufficient technical knowledge. It is ideal to have someone experienced advise on and review your code at this stage. In Feluda, we use the open source Python-specific SAST tool bandit to automate vulnerability scanning on each PR.
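A sketch of a bandit job; the source directory and severity filter are illustrative assumptions, not Feluda's actual settings:

```yaml
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install bandit
      - run: bandit -r feluda/ -ll   # scan the package recursively, reporting medium severity and above
```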
Feluda is a server-side project that gets deployed on a hosted production machine. Hence, we have many Infrastructure-as-Code (IaC) files as part of our production deployment process. IaC files can include Dockerfiles and Kubernetes configuration files. It is important to ensure that the IaC code itself does not have security issues and follows best practices. These issues can be discovered by tools built for scanning IaC code. For Feluda, we use the open source IaC scanning tool Trivy to scan for production configuration vulnerabilities. Fixing IaC vulnerabilities requires knowledge of hardening configuration files, and may require in-depth research.
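A sketch of an IaC scan using the Trivy GitHub Action in misconfiguration mode; the inputs follow the trivy-action documentation at the time of writing and should be verified against the current docs:

```yaml
  iac-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aquasecurity/trivy-action@master
        with:
          scan-type: "config"        # scan IaC files such as Dockerfiles and Kubernetes manifests
          scan-ref: "."
          exit-code: "1"             # fail the job if misconfigurations are found
          severity: "CRITICAL,HIGH"
```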
Note: SAST tools and IaC vulnerability scanning tools keep getting updated with better scanning techniques and/or updated vulnerability databases. Hence, it is best practice to have these tools run weekly as part of a cron job in CI so new vulnerabilities in existing code can be detected.
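In GitHub Actions, this is a small addition to a workflow's trigger block, for example:

```yaml
on:
  pull_request:
  schedule:
    - cron: "0 0 * * 1"   # additionally re-run the scans every Monday at 00:00 UTC
```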
We also use the OSSF Scorecard static supply-chain security analysis tool to measure the security posture of Feluda. This tool suggests fixes to prevent supply-chain attacks, which are becoming increasingly common. The issues detected may require fixes as simple as pinning dependencies.
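A sketch of a Scorecard job based on the scorecard-action documentation; the permissions, input names and version tag should be checked against the current docs:

```yaml
  scorecard:
    runs-on: ubuntu-latest
    permissions:
      id-token: write            # needed by the action to publish results
      security-events: write     # needed to upload SARIF results to code scanning
    steps:
      - uses: actions/checkout@v4
      - uses: ossf/scorecard-action@v2.4.0
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```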
Finally, a software project can never be guaranteed to be 100% secure. It is also not possible for the core team alone to detect every vulnerability in a project. Hence, it is essential to provide a method for external researchers to securely report security issues without fear of retaliation. For that, we have written a security policy for Feluda that can be accessed here. Writing a security policy requires a basic understanding of cybersecurity laws and how they function in practice, best practices in reporting security issues, and vulnerability disclosure processes.
Securing code is a continuous process, and not a purely technical one. The human element is essential: it requires building a collaborative environment, avoiding blame, learning from mistakes, and developing trust.