Bitbucket pipeline documentation

CI/CD tools are an integral part of a software team’s development cycle. Whether you’re using them to automate tests, a release process, or deployments to customers, all teams can benefit from incorporating CI/CD into their workflow.

Bitbucket Pipelines is CI/CD for Bitbucket Cloud that’s integrated in the UI and sits alongside your repositories, making it easy for teams to get up and running building, testing, and deploying their code. It serves teams new to CI/CD all the way through to those with sophisticated delivery and deployment pipelines.

Easy setup

Teams new to CI/CD or familiar with setting up their own CI servers will appreciate how easy it is to get started with Pipelines. It’s a 2-step process to configure a pipeline, and there are a number of language templates available to get started. And because Pipelines is a cloud-native CI/CD tool, you never have to worry about provisioning or managing physical infrastructure, leaving more time to focus on other priorities.

Integrations

We know every team has a different way of working, and this extends to the tools they use in their workflow. With Pipes it’s easy to connect your CI/CD pipeline in Bitbucket with any of the tools you use to test, scan, and deploy, in a plug-and-play fashion. They’re supported by the vendor, which means you don’t need to manage or configure them and, best of all, it’s easy to write your own pipes that connect your preferred tools to your workflow.

There are currently over 60 pipes offered by leading vendors such as AWS, Microsoft, Slack, and more. Learn more about our integrations and get started.

Deployments

For those looking to implement a more mature CI/CD workflow that involves a release process or deployment to environments, Pipelines offers a robust set of deployment functionality that gives teams the confidence and flexibility to track their code from development through code review, build, test, and deployment all the way to production. For more sophisticated workflows you can create up to 10 environments to deploy to, and see what code is being deployed where via the deployment dashboard.

Learn how to set up Bitbucket Deployments.

Increased visibility and collaboration

Visibility into what’s going on and what’s been deployed to customers is vital to all teams. Pipelines has integrations with tools like Jira, Slack, and Microsoft Teams that provide context on your builds and deployments right where your team plans and collaborates. For collaboration tools like Slack, it’s easy to see what’s happening with your CI/CD tool and act on it too.

When integrated with Jira Software, Pipelines provides visibility for everyone who works in Jira, from backlog all the way through to deployment, surfacing build and deployment status in Jira issues as well as which environments a Jira issue has been deployed to.

Pricing

Pipelines pricing is based on a simple, consumption-based model of build minutes used, and every Bitbucket plan includes build minutes. Unlike other cloud vendors, we don’t charge for concurrency, meaning you don’t pay extra to follow CI/CD best practice and run your pipeline steps as fast as you can.

Plan type    Build minutes per month
Free         50 minutes
Standard     2500 minutes
Premium      3500 minutes

If you’re wondering where your team might stand when it comes to build minutes usage, we typically see small teams with fast builds using about 200-600 minutes. Head here to learn more about build minutes and how they work.

Get started with CI/CD today

Every team should have a CI/CD tool as part of their development toolchain, whether you’re simply interested in automated testing or looking to create sophisticated deployment workflows.

Whatever your requirements may be, a tool like Pipelines is perfect for your needs and it’s free to get started!

For a step-by-step tutorial of how to set up Pipelines for your team, head on over here.

Source: https://bitbucket.org/blog/an-introduction-to-bitbucket-pipelines

Bitbucket Pipelines is an integrated CI/CD service, built into Bitbucket. It allows you to automatically build, test and even deploy your code based on a configuration file in your repository. Essentially, containers are created in the cloud and inside these containers you can run commands (similar to how you might on a local machine) but with all of the advantages of a fresh system that is configured for your needs. 

The following includes information about using Provar and Bitbucket Pipelines together. 

Configuring your Provar project


To set up Bitbucket Pipelines, you need to first create and configure the bitbucket-pipelines.yml file in the root directory of your repository.

You also need to configure the Provar project along with the other required files in order to publish it on the Bitbucket repository.

  • ProvarProject: It contains the files built in Provar such as test cases, the src folder, build.xml files, etc.
  • Provar License: This includes the .license folder containing execution licenses.

Build.xml configuration


Edit the following properties in the build.xml file.

  • provar.home: This value is the path of the ProvarHome folder, which contains the latest ANT libraries
  • testproject.home: This value is the Provar project root in your repository
  • testproject.results: The Provar results directory in your repository
  • license.path: This is the path where the .license folder is located
<project default="runtests">
    <property environment="env"/>
    <property name="provar.home" value="${env.PROVAR_HOME}"/>
    <property name="testproject.home" value=".."/>
    <property name="testproject.results" value="../ANT/Results"/>
    <property name="secrets.password" value="${env.PROVARSECRETSPASSWORD}"/>
    <property name="testenvironment.secretspassword" value="${env.ProvarSecretsPassword_EnvName}"/>
    <taskdef name="Provar-Compile" classname="com.provar.testrunner.ant.CompileTask" classpath="${provar.home}/ant/ant-provar.jar"/>
    <taskdef name="Run-Test-Case" classname="com.provar.testrunner.ant.RunnerTask" classpath="${provar.home}/ant/ant-provar.jar;${provar.home}/ant/ant-provar-bundled.jar;${provar.home}/ant/ant-provar-sf.jar"/>
    <taskdef name="Test-Cycle-Report" classname="com.provar.testrunner.ant.TestCycleReportTask" classpath="${provar.home}/ant/ant-provar.jar;${provar.home}/ant/ant-provar-bundled.jar;${provar.home}/ant/ant-provar-sf.jar"/>
    <target name="runtests">
        <Provar-Compile provarHome="${provar.home}" projectPath="${testproject.home}"/>
        <Run-Test-Case provarHome="${provar.home}"
            projectPath="${testproject.home}"
            resultsPath="${testproject.results}"
            resultsPathDisposition="Increment"
            testEnvironment=""
            webBrowser="Chrome_Headless"
            webBrowserConfiguration="Full Screen"
            webBrowserProviderName="Desktop"
            webBrowserDeviceName="Full Screen"
            excludeCallableTestCases="true"
            salesforceMetadataCache="Reuse"
            projectCachePath="../../.provarCaches"
            testOutputlevel="WARNING"
            pluginOutputlevel="WARNING"
            stopTestRunOnError="false"
            secretsPassword="${secrets.password}"
            testEnvironmentSecretsPassword="${testenvironment.secretspassword}"
            invokeTestRunMonitor="true">
            <fileset id="testcases" dir="../tests"></fileset>
        </Run-Test-Case>
    </target>
</project>

Creating a project in Bitbucket


Step 1: Log in to your Bitbucket account.

Step 2: Create a new repository and list the project name, repository name and visibility level.

Step 3: Push the project configured above to the repository.

Configure your pipelines


Step 1: Go to Pipelines.

Step 2: Change the template to Other. Now we need to configure the bitbucket-pipelines.yml file.

Here is an example:

image: atlassian/default-image:2

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - apt-get update && apt install wget unzip xvfb
          - wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
          - echo 'deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main' | tee /etc/apt/sources.list.d/google-chrome.list
          - apt-get update
          - apt-get install google-chrome-stable -y
          - export PROVAR_HOME="$BITBUCKET_CLONE_DIR/ProvarHome"
          - mkdir $BITBUCKET_CLONE_DIR/ProvarHome
          - curl -O https://download.provartesting.com/latest/Provar_ANT_latest.zip
          - unzip -o Provar_ANT_latest.zip -d ProvarHome
          - rm Provar_ANT_latest.zip
          - cd ${BITBUCKET_CLONE_DIR}/test/ANT
          - xvfb-run ant -f build.xml
        artifacts: # defining the artifacts to be passed to each future step
          - test/ANT/Results/*

Explanation of the sample script above


First, we need to specify the base Docker image. Provar supports Java version 8, and test case execution requires ANT, so we use the official atlassian/default-image:2 image. We could also use the frekele/ant:1.10.3-jdk8 image, which already has Java 8 and ANT installed.

PROVAR_HOME is the path of the folder which contains the latest Provar ANT files. This is referenced in the build.xml provar.home property.

We need to execute our UI test cases in a browser, which is why the Chrome installation is included. To execute test cases in headless mode, we also need to install xvfb. Before the actual test execution in the script section, install xvfb and run the xvfb service, then execute your test cases using the xvfb-run ant -f build.xml command.

Reports and artifacts


Step 1: To get the reports folder as artifacts in Bitbucket Pipelines, just add the following in bitbucket-pipelines.yml.

artifacts: # defining the artifacts to be passed to each future step
    - test/ANT/Results/*

Step 2: Now commit the file. This will create a file named bitbucket-pipelines.yml in your Bitbucket repository.

Step 3: Go to Pipelines. You can see your pipeline running.

Step 4: Click on the Artifacts tab. You can download the artifacts here.

Parameterization using environment variables


Parameterization can be used to do the following:

  • Using the secrets and environments password
  • Adding data to the build.xml file at run time

To add variables to the repository, follow the below steps:

Step 1: Click the Repository Settings.

Step 2: Add a variable for the secrets password. Mark it as secured which will mask the value.

Step 3: Add a variable for the browser as well. This will help you to define the browser where the execution will be performed.

You can access the variables using the ${env.VARIABLENAME} format in the build.xml file, as shown in the build.xml sample above.

Parallel testing


You can achieve parallel testing by configuring parallel steps in Bitbucket Pipelines. Add a set of steps in your bitbucket-pipelines.yml file inside a parallel block. These steps will be initiated in parallel by Bitbucket Pipelines, so they can run independently and complete faster.

Here is an example:

image: "frekele/ant:1.10.3-jdk8" pipelines: default:     - step:             name: No parallel tag             services:               - docker             script:               - echo "Without parallel Tag"     - parallel:             - step:                 name: Parallel 1                 services:                   - docker                               script:                   - apt-get update && apt install wget unzip xvfb                   - wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -                   - echo 'deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main' | tee /etc/apt/sources.list.d/google-chrome.list                   - apt-get update                   - apt-get install google-chrome-stable -y                   - cd test/ANT && xvfb-run ant -f build.xml                 artifacts: # defining the artifacts to be passed to each future step.                           - test/ANT/Results/*             - step:                 name: Parallel 2                 services:                 - docker                 script:                 - apt-get update && apt install wget unzip xvfb                   - wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -                   - echo 'deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main' | tee /etc/apt/sources.list.d/google-chrome.list                   - apt-get update                   - apt-get install google-chrome-stable -y                   - cd test/ANT && xvfb-run ant -f build.xml                 artifacts: # defining the artifacts to be passed to each future step.                           - test/ANT/Results/*

Job scheduling


You can schedule your jobs in Bitbucket Pipelines. 

Step 1: Go to Pipelines.

Step 2: Click on Schedules.

Step 3: Select your branch, pipeline and the schedule (i.e., Hourly, Weekly or Daily). This will schedule your run as per the configuration.

Executing Bitbucket Pipelines using the REST API


If you would like to execute the pipeline from an external application or release management tool like Copado, Flosum, etc., you can use webhooks. For more information about incoming webhook requests, refer to the Bitbucket Cloud REST API documentation.
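For instance, a pipeline can be triggered on a branch through the Bitbucket Cloud pipelines endpoint. A minimal sketch, assuming authentication with a username and app password and placeholder workspace and repository values:

curl -X POST \
  -u "username:app_password" \
  -H "Content-Type: application/json" \
  "https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/pipelines/" \
  -d '{"target": {"type": "pipeline_ref_target", "ref_type": "branch", "ref_name": "master"}}'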

Source: https://www.provartesting.com/documentation/devops/continuous-integration/bitbucket-pipelines/

Bitbucket Pipelines

What you'll learn

  • How to run Cypress tests with Bitbucket Pipelines as part of CI/CD pipeline
  • How to parallelize Cypress test runs within Bitbucket Pipelines

With its integrated CI/CD offering, Pipelines, Bitbucket gives developers "CI/CD where it belongs, right next to your code. No servers to manage, repositories to synchronize, or user management to configure."

Detailed documentation is available in the Bitbucket Pipelines Documentation.

Basic Setup

The example below shows a basic setup and job to use Bitbucket Pipelines to run end-to-end tests with Cypress and Electron.
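A minimal sketch of such a configuration (the Node image tag and npm script names are assumptions, not taken from the kitchensink project):

image: node:16

pipelines:
  default:
    - step:
        name: Run E2E tests with Cypress
        caches:
          - node
        script:
          - npm ci
          # start the project web server in the background
          - npm start &
          # run the Cypress tests with the bundled Electron browser
          - npx cypress run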

To try out the example above yourself, fork the Cypress Kitchen Sink example project and place the above Bitbucket Pipelines configuration in bitbucket-pipelines.yml.

How this works:

  • On push to this repository, this job will provision and start a Bitbucket Pipelines-hosted Linux instance to run the pipelines defined in the pipelines section of the configuration.
  • The code is checked out from our GitHub/Bitbucket repository.
  • Finally, our scripts will:
    • Install npm dependencies
    • Start the project web server
    • Run the Cypress tests within our GitHub/Bitbucket repository in Electron

Testing in Chrome and Firefox with Cypress Docker Images

The Cypress team maintains the official Docker Images for running Cypress locally and in CI, which are built with Google Chrome and Firefox. For example, this allows us to run the tests in Firefox by passing the --browser firefox option to cypress run.
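A sketch of that setup; the exact cypress/browsers image tag is an assumption, so check Docker Hub for a current one:

image: cypress/browsers:node16.14.2-slim-chrome100-ff99

pipelines:
  default:
    - step:
        name: Run E2E tests in Firefox
        caches:
          - node
        script:
          - npm ci
          - npm start &
          # select Firefox from the browsers baked into the image
          - npx cypress run --browser firefox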

Caching Dependencies and Build Artifacts

Per the Caches documentation, Bitbucket offers options for caching dependencies and build artifacts across many different workflows.

To cache node_modules, the npm cache, across builds, the caches attribute and configuration has been added below.

Artifacts from a job can be defined by providing paths to the artifacts attribute.

Using the definitions block we can define additional caches for npm and Cypress.
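Putting those pieces together, a sketch under stated assumptions (the cache paths follow npm and Cypress defaults; artifact paths are illustrative):

definitions:
  caches:
    npm: $HOME/.npm
    cypress: $HOME/.cache/Cypress

pipelines:
  default:
    - step:
        caches:
          - node
          - npm
          - cypress
        script:
          - npm ci
          - npm start &
          - npx cypress run
        artifacts:
          # share screenshots and videos with any following steps
          - cypress/screenshots/**
          - cypress/videos/**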

Parallelization

The Cypress Dashboard offers the ability to parallelize and group test runs along with additional insights and analytics for Cypress tests.

Before diving into an example of a parallelization setup, it is important to understand the two different types of jobs that we will declare:

  • Install Job: A job that installs and caches dependencies that will be used by subsequent jobs later in the Bitbucket Pipelines workflow.
  • Worker Job: A job that handles execution of Cypress tests and depends on the install job.

Install Job

The separation of installation from test running is necessary when running parallel jobs. It allows for reuse of various build steps aided by caching.

First, we break the pipeline up into reusable chunks of configuration using a YAML anchor. This will be used by the worker jobs.

Worker Jobs

Next, we define the worker jobs that will run the Cypress tests with Chrome in parallel.

We can use the YAML anchor in our definition of the pipeline to execute parallel jobs using the parallel attribute. This allows us to run multiple instances of Cypress at the same time.

The complete bitbucket-pipelines.yml is below:
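A reconstruction under stated assumptions: the anchor name, image tag, group label, and worker count are illustrative, and CYPRESS_RECORD_KEY is assumed to be set as a secured repository variable.

image: cypress/browsers:node16.14.2-slim-chrome100-ff99

definitions:
  steps:
    - step: &e2e
        name: UI - Chrome
        caches:
          - node
        script:
          - npm ci
          - npm start &
          # record to the Cypress Dashboard and load-balance specs across workers
          - npx cypress run --record --parallel --group "UI - Chrome" --browser chrome

pipelines:
  default:
    - parallel:
        - step: *e2e
        - step: *e2e
        - step: *e2e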

The above configuration, using the --record and --parallel flags to cypress run, requires setting up recording of test results to the Cypress Dashboard.

Using the Cypress Dashboard with Bitbucket Pipelines

In the Bitbucket Pipelines configuration we have defined in the previous section, we are leveraging three useful features of the Cypress Dashboard:

  1. Recording test results to the Cypress Dashboard with the --record flag.

  2. Parallelizing test runs and optimizing their execution via intelligent load-balancing of test specs across CI machines with the --parallel flag.

  3. Organizing and consolidating multiple cypress run calls by labeled groups into a single report within the Cypress Dashboard. In the example above we use the --group flag to organize all UI tests for the Chrome browser into a group labeled "UI - Chrome" in the Cypress Dashboard report.

Cypress Real World Example with Bitbucket Pipelines

A complete CI workflow against multiple browsers, viewports and operating systems is available in the Real World App (RWA).

Clone the Real World App (RWA) and refer to the bitbucket-pipelines.yml file.

Source: https://docs.cypress.io/guides/continuous-integration/bitbucket-pipelines

How To Set Up Bitbucket Pipelines for PHP & Node.js

Introduction:

In this tutorial, you will learn how to set up Bitbucket Pipelines for a PHP & Node.js application and deploy them to your Ubuntu 18.04 server.

I’m writing this because I wasn’t able to get a step-by-step guide for how to do it from one source. I had to do some research and pull from several different sources to achieve what I wanted. Hopefully, this will be of help to someone.

Let’s get to it, shall we?

For starters, let’s define terms:

Continuous Integration (CI) is a development practice where developers integrate code into a shared repository frequently, preferably several times a day. Each integration can then be verified by an automated build and automated tests.

Continuous Deployment (CD) is a strategy for software releases wherein any code commit that passes the automated testing phase is automatically released into the production environment, making changes that are visible to the software’s users.

My primary target was deploying an application to my Ubuntu DigitalOcean server (get some credit when you sign up by clicking here. Full disclosure: I get something in return) without having to SSH into it and run the deployment script.

Enable Pipelines in Bitbucket

Go to your repository Settings -> under the Pipelines section, click on Settings -> click on the Enable Pipelines switch to enable Pipelines.

Pipelines are now enabled.

Next up, we’ll set up SSH keys for our repository.

Set up repository SSH Keys

These are the keys you’ll set up on your production or staging server to enable external logins to your server from bitbucket during the deployment steps which we will discuss later on.

To set these up, go to SSH Keys (still within the Pipelines section in your repository settings) -> then click on Generate keys. You could use your own keys, but a couple of sources have had issues with that approach and ended up advising against it. Letting Bitbucket generate the keys is the better alternative.

After that, set up Known Hosts. Enter the IP address of your Ubuntu server as the Host address then click Fetch to generate the host’s fingerprint. After the fingerprint is generated, click on the Add Host button.

Next up, we’ll add the public key from our repository to the authorized_keys file of our Ubuntu server.

Adding public key from Bitbucket Repository to Ubuntu server authorized_keys

Login to your Ubuntu server. It’s important to note that SSH login should be enabled for your server. If you haven’t enabled it, kindly follow the steps outlined here.

Permanently add private key identity to the authentication agent

To prevent entering the passphrase to your private key while pulling your Bitbucket repositories to your servers, you need to persist your identity using ssh-add. By default, the identity is lost every time you log out hence the need to persist it. This is important if your key has a passphrase set up. If not, you can ignore this part.

Open up your .bashrc file in an editor, for example with nano ~/.bashrc.

Copy the following contents to the bottom of your .bashrc file.
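A widely used snippet for this; the SSH_ENV path is conventional rather than taken from the original article, and ssh-add loads your default key:

SSH_ENV="$HOME/.ssh/environment"

function start_agent {
    echo "Initializing new SSH agent..."
    # start the agent and save its environment variables to a file
    /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
    chmod 600 "${SSH_ENV}"
    . "${SSH_ENV}" > /dev/null
    /usr/bin/ssh-add
}

# Reuse a running agent if one exists; otherwise start a new one
if [ -f "${SSH_ENV}" ]; then
    . "${SSH_ENV}" > /dev/null
    ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || start_agent
else
    start_agent
fi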

This keeps the ssh-agent running even when you log out of your server and persists your private key identity. To get it going, however, you’ll need to log out and log in to your server. Once you log in, the SSH Agent will be initialized and you’ll be requested to add the passphrase to your private key.

Set up a deployment script on your server

This is the script that we will run to deploy your application. It is basically a list of the commands you use to deploy your application on your server.

Run the following commands to set up an executable deployment script.
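For example (the script name and location are assumptions used throughout the rest of this tutorial):

mkdir -p ~/scripts
touch ~/scripts/deploy.sh
chmod +x ~/scripts/deploy.sh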

A sample PHP application deployment script:
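A hypothetical sketch for a Laravel-style PHP app; the directory, branch, and artisan commands are assumptions, so substitute your own deployment commands:

#!/bin/bash
set -e
cd /var/www/my-php-app           # assumed application directory
git pull origin master           # pull the latest code
composer install --no-dev        # install production dependencies
php artisan migrate --force      # run database migrations (Laravel assumed)
php artisan config:cache         # rebuild the configuration cache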

A sample Node.js application deployment script:
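A hypothetical sketch for a Node.js app; pm2 as process manager and the paths are assumptions:

#!/bin/bash
set -e
cd /var/www/my-node-app          # assumed application directory
git pull origin master           # pull the latest code
npm ci --production              # install production dependencies
pm2 restart my-node-app          # restart the app (pm2 assumed)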

Setting up this script is important as bitbucket will run it once it’s logged into your server.

Once you are done with that, we can now set up the bitbucket-pipelines.yml.

Setting up bitbucket-pipelines.yml

This is the file that defines your build, test and deployment configurations. It can be configured per branch i.e. what tests to run when some code is pushed to master and where it will be deployed. If you are using a staging server, you can set up the server details separate from the production server details.

Bitbucket has made it easy to set this up by providing templates based on the type of application you are running.

  1. Go to the Pipelines menu item on your left to proceed.
  2. Choose a language template. In this case, PHP or Node.js.
  3. Set up a YML file.
  4. Click on Save. This will commit to your branch and create a new pipeline based on your YML file.

*Note: These are sample yml files. In your script section, you can run the commands necessary for your application.

Setting up bitbucket-pipelines.yml file for our PHP application

This script showcases how you can deploy changes based on different branches.
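A sketch under stated assumptions (image tag, branch names, server addresses, and the deploy script path are placeholders):

image: php:7.4-cli

pipelines:
  branches:
    staging:
      - step:
          name: Deploy to staging
          deployment: staging
          script:
            # log in over SSH using the repository keys set up earlier
            - ssh user@staging-server-ip "bash ~/scripts/deploy.sh"
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - ssh user@production-server-ip "bash ~/scripts/deploy.sh"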

Setting up bitbucket-pipelines.yml file for your Node.js application
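A comparable sketch for Node.js (image tag, test command, and server address are assumptions):

image: node:12

pipelines:
  branches:
    master:
      - step:
          name: Test and deploy
          deployment: production
          caches:
            - node
          script:
            - npm install
            - npm test
            - ssh user@production-server-ip "bash ~/scripts/deploy.sh"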

Conclusion

By following this tutorial, you will be able to deploy your applications to your Ubuntu server.

Bitbucket allocates 50 free build minutes per account so keep that in mind as you write your YML files. The builds won’t run if you are past your allocated minutes. Keep the YML files short and sweet.

Thank you for reading.

If you find any errors, have any issues or questions don’t hesitate to reach out via the comment section.


Written by Nicholas Kimuli, Software Engineer at Andela

Source: https://andela.com/insights/continuous-deploymentcd-using-bitbucket-pipelines-and-ubuntu-server/


Configure bitbucket-pipelines.yml

The bitbucket-pipelines.yml file defines your Pipelines builds configuration. If you're new to Pipelines, refer to the Get started with Bitbucket Pipelines doc for more information. 

Basic configuration 

With a basic configuration, you can do things like write scripts to build and deploy your projects and configure caches to speed up builds. You can also specify different images for each step to manage different dependencies across actions you are performing in your pipeline.

A pipeline is made up of a list of steps, and you can define multiple pipelines in the configuration file. In the diagram below, you can see a pipeline configured under the default section. The pipeline configuration file can have multiple sections identified by particular keywords.

[Diagram: the structure of a yml file in relation to your pipeline]

Before you begin

  • The file must contain at least one pipeline section consisting of at least one step and one script inside the step.

  • Each step has 4 GB of memory available.

  • A single pipeline can have up to 100 steps.

  • Each step in your pipeline runs a separate Docker container. If you want, you can use different types of containers for each step by selecting different images.

Steps

1. To configure the yaml file, in Bitbucket go to your repository and select Pipelines from the left navigation bar. Alternatively, you can configure your yaml file without using Bitbucket's interface.

2. Choose a language.

Note: Pipelines can be configured for building or deploying projects written in any language. Language guides

3. Choose an image.

Descriptions of each section outlined in the above diagram


default

Contains the pipeline definition for all branches that don't match a pipeline definition in other sections.

The default pipeline runs on every push to the repository unless a branch-specific pipeline is defined. You can define a branch pipeline in the branches section.

Note: The default pipeline doesn't run on tags or bookmarks.


branches

Defines a section for all branch-specific build pipelines. The names or expressions in this section are matched against branches in your Git repository.

See Branch workflows for more information about configuring pipelines to build specific branches in your repository.

 Check out the glob patterns cheat sheet to define the branch names.


tags

Defines all tag-specific build pipelines. The names or expressions in this section are matched against tags and annotated tags in your Git repository.

 Check out the glob patterns to define your tags.


bookmarks

Defines all bookmark-specific build pipelines.

Check out the glob patterns cheat sheet to define your bookmarks.


pull requests

A special pipeline that only runs on pull requests initiated from within your repository. It merges the destination branch into your working branch before it runs. Pull requests from a forked repository don't trigger the pipeline. If the merge fails, the pipeline stops.

Pull request pipelines run in addition to any branch and default pipelines that are defined, so if the definitions overlap you may get 2 pipelines running at the same time.

If you already have branches in your configuration, and you want them all to only run on pull requests, replace the keyword branches with pull-requests.

Example
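A representative snippet (the branch pattern and command are illustrative):

pipelines:
  pull-requests:
    '**':
      - step:
          script:
            - echo "Runs on every pull request created from within this repo"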

Check out the glob patterns cheat sheet to define the pull-requests.


runs-on

To use your runner in Pipelines, add a runs-on parameter to a step, and it will run on the next available runner that has all the required labels. If all matching runners are busy, your step will wait until one becomes available again. If you don’t have any online runners in your repository that match all labels, the step will fail.

Example
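A sketch of the runs-on syntax (the self.hosted and linux labels are the standard required labels; the command is illustrative):

pipelines:
  default:
    - step:
        name: Step on my runner
        runs-on:
          - self.hosted
          - linux
        script:
          - echo "This step runs on a self-hosted runner"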

custom

Defines pipelines that can only be triggered manually or scheduled from the Bitbucket Cloud interface.

Example
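An illustrative snippet (the custom pipeline names are placeholders):

pipelines:
  custom:
    sonar:
      - step:
          script:
            - echo "Manually triggered Sonar scan"
    deployment-to-prod:
      - step:
          script:
            - echo "Manually triggered production deployment"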

For more information, see Run pipelines manually.


Advanced configuration

Use the advanced options for running services and running tests in parallel. You can also do things such as configuring a manual step, setting a maximum time for each step, and configuring 2x steps to get 8 GB of memory.

Before you begin

  • A pipeline YAML file must have at least one section with a keyword and one or more steps.

  • Each step has 4 GB of memory available.

  • A single pipeline can have up to 100 steps.

  • Each step in your pipeline runs a separate Docker container. If you want, you can use different types of containers for each step by selecting different images.

Global configuration options

[Diagram: a yaml file's global configuration options]
[Diagram: an optional step's configuration]

List of keywords associated with the global configuration options and their descriptions


variables

[Custom pipelines only] Contains variables that are supplied when a pipeline is launched. To enable the variables, define them under the custom pipeline that you want to enter when you run the pipeline:

Example
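A representative snippet (pipeline and variable names are illustrative):

pipelines:
  custom:
    custom-name-and-region:
      - variables:
          - name: Username
          - name: Region
            default: ap-southeast-2
      - step:
          script:
            - echo "User name is $Username"
            - echo "Region is $Region"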

Then, when you run a custom pipeline by accessing Branches ⋯ Run pipeline for a branch > Custom, you can set the variable values to run your custom pipeline.

The keyword variables can also be part of the definition of a service.

name

When the keyword name is in the variables section of your yaml, it defines variables that you can add or update when running a custom pipeline. Pipelines can use the keyword inside a step.

default

When the keyword default is in the variables section of your yaml, it defines a default variable value. The default value is used when no value is supplied at pipeline launch.

parallel

Parallel steps enable you to build and test faster, by running a set of steps at the same time. The total number of build minutes used by a pipeline will not change if you make the steps parallel, but you'll be able to see the results sooner.

The total number of steps you can have in a Pipeline definition is limited to 100, regardless of whether they are running in parallel or serial.

Indent the steps to define which steps run concurrently:

Example
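An illustrative snippet (step names and scripts are placeholders):

pipelines:
  default:
    - step:
        name: Build
        script:
          - ./build.sh
    - parallel:
        - step:
            name: Integration tests 1
            script:
              - ./integration-tests.sh --batch 1
        - step:
            name: Integration tests 2
            script:
              - ./integration-tests.sh --batch 2
    - step:
        name: Deploy
        script:
          - ./deploy.sh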

Learn more about parallel steps.

step

Defines a build execution unit. Steps are executed in the order that they appear in the bitbucket-pipelines.yml file. A single pipeline can have up to 100 steps.

Each step in your pipeline will start a separate Docker container to run the commands configured in the script. Each step can be configured to:

  • Use a different Docker image.

  • Configure a custom max-time.

  • Use specific caches and services.

  • Produce artifacts that subsequent steps can consume.

  • You can have a clone section here.

Steps can be configured to wait for a manual trigger before running. To define a step as manual, add trigger: manual to the step in your bitbucket-pipelines.yml file. Manual steps:

  • They can only be executed in the order that they are configured. You cannot skip a manual step.

  • They can only be executed if the previous step has successfully completed.

  • They can only be triggered by users with write access to the repository.

  • They are triggered through the Pipelines web interface.

If your build uses both manual steps and artifacts, the artifacts are stored for 14 days following the execution of the step that produced them. After this time, the artifacts expire and any manual steps in the pipeline can no longer be executed.

Note: You can't configure the first step of a pipeline as a manual step.

name

Defines a name for a step to make it easier to see what each step is doing in the display.

image

Bitbucket Pipelines uses Docker containers to run your builds.

  • You can use the default image (atlassian/default-image:2) provided by Bitbucket or define a custom image. You can specify any public or private Docker image that isn't hosted on a private network.

  • You can define images at the global or step level. You can't define an image at the branch level.

To specify an image, use image: <your_account/repository_details>:<tag>

For more information about using and creating images, see Use Docker images as build environments.

Example
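An illustrative snippet (the step-level node image tag is a placeholder):

image: atlassian/default-image:2

pipelines:
  default:
    - step:
        name: Uses the global image
        script:
          - echo "Runs in atlassian/default-image:2"
    - step:
        name: Uses its own image
        image: node:14
        script:
          - echo "Runs in node:14"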

trigger

Specifies whether a step will run automatically or only after someone manually triggers it. You can define the trigger type as manual or automatic. If the trigger type is not defined, the step defaults to running automatically. The first step cannot be manual. If you want to have a whole pipeline only run from a manual trigger then use a custom pipeline.

Example
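A representative snippet (commands are illustrative):

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
    - step:
        name: Deploy
        trigger: manual      # waits for a manual trigger in the UI
        script:
          - ./deploy.sh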

deployment

Sets the type of environment for your deployment step, and it is used in the Deployments dashboard. Valid values are: test, staging, or production.

The following step will display in the test environment in the Deployments view:

Example
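An illustrative snippet matching the description above (the script is a placeholder):

pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - ./deploy.sh target/my-app.jar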

size

You can allocate additional memory to a step, or to the whole pipeline. By specifying the size of 2x, you'll have double the memory available, for example 4 GB memory → 8 GB memory.

At this time, valid sizes are 1x and 2x.

2x pipelines will use twice the number of build minutes.

Example: Overriding the size of a single step
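A minimal sketch:

pipelines:
  default:
    - step:
        size: 2x    # doubles the memory available to this step
        script:
          - echo "This step gets 8 GB of memory"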

script

Contains a list of commands that are executed in sequence. Scripts are executed in the order in which they appear in a step. We recommend that you move large scripts to a separate script file and call it from the bitbucket-pipelines.yml.

pipe

Pipes make complex tasks easier, by doing a lot of the work behind the scenes. This means you can just select which pipe you want to use, and supply the necessary variables. You can look at the repository for the pipe to see what commands it is running. Learn more about pipes.

A pipe to send a message to Opsgenie might look like the following example:
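A sketch of that pipe usage; the version tag is an assumption, so check the pipe's repository for the current one:

pipelines:
  default:
    - step:
        name: Alert Opsgenie
        script:
          - pipe: atlassian/opsgenie-send-alert:0.2.0
            variables:
              GENIE_KEY: $GENIE_KEY
              MESSAGE: "Build failure alert from Pipelines"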

You can also create your own pipes. If you do, you can specify a docker based pipe with the syntax:
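The account, image, and tag below are placeholders:

script:
  - pipe: docker://<my_account>/<my_pipe_image>:<tag>
    variables:
      MY_VARIABLE: value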

after-script

Commands inside an after-script section will run when the step succeeds or fails. This could be useful for clean up commands, test coverage, notifications, or rollbacks you might want to run, especially if your after-script uses the value of BITBUCKET_EXIT_CODE.

Note: If any commands in the after-script section fail:

  • we won't run any more commands in that section

  • it will not affect the reported status of the step.

Example
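A representative snippet:

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
        after-script:
          # runs whether the step succeeds or fails
          - echo "Step exit code was $BITBUCKET_EXIT_CODE"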

artifacts

Defines files that are produced by a step, such as reports and JAR files, that you want to share with a following step.

Artifacts can be defined using glob patterns.

Example
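An illustrative snippet (the paths are placeholders):

pipelines:
  default:
    - step:
        name: Build
        script:
          - npm run build
        artifacts:
          - dist/**
          - reports/*.txt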

For more information, see using artifacts in steps.

options

Contains global settings that apply to all your pipelines. The main keyword you'd use here is max-time.

max-time

You can define the maximum amount of minutes a step can execute at a global level or at a step level. Use a whole number greater than 0 and less than 120.

Example
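A minimal sketch showing both levels:

options:
  max-time: 60        # global limit, in minutes

pipelines:
  default:
    - step:
        max-time: 5   # overrides the global limit for this step
        script:
          - npm test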

If you don't specify a max-time, it defaults to 120.

clone

Contains settings for when we clone your repository into a container. Settings here include:

  • LFS - Support for Git LFS

  • depth - the depth of the Git clone.

  • Setting enabled to false will disable Git clones.

oidc

Enables the use of OpenID Connect with Pipelines and your resource server. The oidc value must be set to true to set up and configure OpenID Connect.

Example
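A minimal sketch:

pipelines:
  default:
    - step:
        oidc: true    # the step can request an OIDC token from Bitbucket
        script:
          - echo "OIDC is enabled for this step"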

LFS (GIT only)

Enables the download of LFS files in your clone. It defaults to false if not specified. Note that the keyword is supported only for Git repositories.

Example
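A minimal sketch:

clone:
  lfs: true    # download Git LFS files with the clone

pipelines:
  default:
    - step:
        script:
          - echo "LFS files are available"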

depth (Git only)

Defines the depth of Git clones for all pipelines. Note that the keyword is supported only for Git repositories.

Use a whole number greater than zero to specify the depth. Use full for a full clone. If you don't specify the Git clone depth, it defaults to 50.

Example
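A minimal sketch:

clone:
  depth: full    # or a whole number greater than zero

pipelines:
  default:
    - step:
        script:
          - echo "Clone depth configured globally"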

enabled

Setting enabled to false will disable Git clones.

Example
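A minimal sketch:

pipelines:
  default:
    - step:
        clone:
          enabled: false    # this step starts without the repository contents
        script:
          - echo "No clone performed"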

condition

This allows steps to be executed only when a condition or rule is satisfied. Currently, the only condition supported is changesets. Use changesets to execute a step only if one of the modified files matches the expression in includePaths.

Changes that are taken into account:

In a pull-request pipeline, all commits are taken into account, and if you provide an includePaths list of patterns, a step will be executed when at least one commit change matches one of the conditions. The format for pattern matching follows the glob patterns.

Example
In the following example, step1 will only execute if the commit that triggered the pipeline includes changes to XML files inside the path1 directory or to any file in the nested directory structure under path2.
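A sketch of the condition syntax; the step name and paths mirror the description above:

pipelines:
  default:
    - step:
        name: step1
        condition:
          changesets:
            includePaths:
              # any changes to XML files directly under path1
              - "path1/*.xml"
              # any changes under path2 and its subdirectories
              - "path2/**"
        script:
          - echo "Runs only when matching files changed"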

If the files have no changes, the step is skipped and the pipeline succeeds.

For other types of pipelines, only the last commit is considered. This should be fine for pull request merge commits in your default main branch, for example, but if you push multiple commits to a branch at the same time, or push multiple times to a given branch, you might experience non-intuitive behavior when failing pipelines turn green only because the failing step is skipped on the next run.

Conditions and merge checks

If a successful build result is among your pull request merge checks, be aware that conditions on the steps can produce false-positives for branch pipelines. If build result consistency is paramount, consider avoiding conditions entirely or use pull-request pipelines only.

definitions

Define resources used elsewhere in your pipeline configuration. Resources can include:

services

Pipelines can spin up separate docker containers for services, which results in faster builds, and easy service editing.

Example of a fully configured service
If you want a MySQL service container (a blank database available on localhost:3306 with a default database pipelines, user root, and password let_me_in) you could add:
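A sketch matching that description (image tag assumed):

definitions:
  services:
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: pipelines
        MYSQL_ROOT_PASSWORD: let_me_in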

Learn more about how to use services.

caches

Re-downloading dependencies from the internet for each step of a build can take a lot of time. With a cache, they are downloaded once to our servers and then loaded locally into the build each time.

Example
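An illustrative snippet combining a predefined cache with a custom one (the bundler path is a common convention):

definitions:
  caches:
    bundler: vendor/bundle

pipelines:
  default:
    - step:
        caches:
          - node       # predefined cache
          - bundler    # custom cache defined above
        script:
          - npm install
          - bundle install --path vendor/bundle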


YAML anchors

YAML anchors - a way to define a chunk of your yaml for easy re-use - see YAML anchors.

Source: https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/

Pipeline triggers

Bitbucket allows you to run multiple pipelines by triggering them manually or by scheduling the trigger at a given time.

Tutorial

In the following tutorial, you’ll learn how to trigger manual pipelines or how to schedule triggers.

Before you begin

Triggers in Pipelines have the following limitations:


Run pipelines steps manually

Manual steps allow you to customize your CI/CD pipeline by making some steps run only if they are manually triggered. This is useful for items such as deployment steps, where manual testing or checks are required before the step runs.

Configure a manual step by adding trigger: manual  to the step in your bitbucket-pipelines.yml file.

Since a pipeline is triggered on a commit, you can't make the first step manual. If you'd like a pipeline to only run manually, you can set up a custom pipeline instead. Another advantage of a custom pipeline is that you can temporarily add or update values for your variables, for example, to add a version number, or supply a single-use value.

Set a manual pipeline

Any existing pipeline can also be manually run against a specific commit, or as a scheduled build.

If you want a pipeline to only run manually, use a custom pipeline. Custom pipelines do not run automatically on a commit to a branch. To define a custom pipeline, add the pipeline configuration to the custom section of your bitbucket-pipelines.yml file. Pipelines that are not defined as custom will also run automatically when a push to the branch occurs.

You'll need write permission on the repository to run a pipeline manually, and you can trigger it from the Bitbucket Cloud UI.

Steps

  1. Add a pipeline to the bitbucket-pipelines.yml file. You can manually trigger a build for any pipeline build configuration included in your bitbucket-pipelines.yml file.

Example:
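A representative snippet (commands are illustrative):

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
    - step:
        name: Deploy
        trigger: manual
        script:
          - ./deploy.sh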

  2. Trigger the pipeline from Bitbucket Cloud. Pipelines can be triggered manually from either the Branches view or the Commits view in the Bitbucket Cloud interface.

Run a pipeline manually from the Branches view

  1. In Bitbucket, choose a repo and go to Branches.

  2. Choose the branch you want to run a pipeline for.

  3. Click (...) , and select Run pipeline for a branch.

  4. Choose a pipeline and click Run:

Run a pipeline manually from the Commits view

  1. In Bitbucket, choose a repo and go to Commits.

  2. Go to the Commits' view for a commit.

  3. Select a commit hash.

  4. Select Run pipeline.

  5. Choose a pipeline, then click Run:

Run a pipeline manually from the Pipelines page

  1. In Bitbucket, choose a repo and go to Pipelines.

  2. Click Run pipeline

  3. Choose branch, a pipeline, and click Run

Additionally, you can run custom pipelines manually, passing variables to the pipeline.
To enable the variables, define them under the custom pipeline that you want to enter when you run the pipeline.

Example:
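An illustrative snippet (pipeline and variable names are placeholders):

pipelines:
  custom:
    deploy-with-version:
      - variables:
          - name: Version
            default: "1.0.0"
      - step:
          script:
            - echo "Deploying version $Version"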

Then, when you run a custom pipeline by going to Branches ⋯ Run pipeline for a branch > Custom, you can set the variable values for the run.


On schedule

Scheduled pipelines allow you to run a pipeline at hourly, daily or weekly intervals.

  • Scheduled pipelines run in addition to any builds triggered by commits, or triggered manually.

  • You can create a schedule for any pipeline defined in your bitbucket-pipelines.yml file.

  • If you make a custom pipeline it will only run when scheduled or manually triggered.

Steps

Create a pipeline

  1. Here's a simple example showing how you would define a custom pipeline in the bitbucket-pipelines.yml file.

Example:
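A representative snippet of a custom pipeline suitable for scheduling (name and commands are illustrative):

pipelines:
  custom:
    nightly-tests:
      - step:
          name: Nightly test run
          script:
            - npm install
            - npm test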

Read more about custom pipelines.

Create a schedule for your pipeline

  1. Go to the repository in Bitbucket.

  2. Click Pipelines then Schedules (at the top right), and then click New schedule.

  3. Choose the Branch and Pipeline that you want to schedule:

    • The schedule will run the HEAD commit of the branch.

    • The pipeline must be defined in the bitbucket-pipelines.yml on the branch you selected.  

  4. Set the Schedule:

    • Select how often you would like the pipeline to run (hourly, daily or weekly)

    • Select the time (in your local time). However, your pipeline will be scheduled in UTC time (unaffected by daylight saving time)

    • The scheduled pipeline can run at any time in the selected time period. This is to distribute all schedules in Pipelines triggering across the hour.

Remove a schedule

Go to Pipelines > Schedules (at the top right-hand side of the screen) to see all the schedules for a repository.

  • Remove a schedule by using the 'trash' icon at the right of the schedule.

  • Note that schedules created using the API are displayed as a Cron expression (such as 0 10 15 * *).

Branch workflows

You can change what your pipeline does depending on which branch you push to. All you need is some branch-specific configuration in your bitbucket-pipelines.yml file.

See also Configure bitbucket-pipelines.yml.

Example:
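A sketch matching the description that follows (the echo commands are illustrative):

pipelines:
  default:
    - step:
        script:
          - echo "Runs on commits to any other branch, e.g. experimental"
  branches:
    main:
      - step:
          script:
            - echo "Runs on commits to main"
    feature/*:
      - step:
          script:
            - echo "Runs on commits to feature branches such as feature/BB-123-fix-links"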

That example shows two branches based on the main branch:

  • a branch called feature/BB-123-fix-links that is a feature branch

  • a branch called experimental where your team can go crazy innovating and breaking stuff. This branch is not a feature branch.

The same bitbucket-pipelines.yml file lives in the root directory of each branch. On each push to a branch, Pipelines executes the scripts assigned to that branch in the bitbucket-pipelines.yml file where:

  • main step definition contains instructions that run on a commit to main

  • feature/* definition contains instructions that run on a commit to any feature branch (that's our BB-123-fix-links branch)

  • default definition contains instructions that run on a commit to any branch that is not main or feature (that's our experimental branch)

Note that the branch pipelines are triggered only if the bitbucket-pipelines.yml file requirements for a branch are met.

If you ever want to push a commit and skip triggering its pipeline, you can add [skip ci] or [ci skip] to the commit message.

Keywords

default: Contains the pipeline definition for all branches that don't match a pipeline definition in other sections. The default pipeline runs on every push to the repository unless a branch-specific pipeline is defined. You can define a branch pipeline in the branches section.

Note: The default pipeline doesn't run on tags or bookmarks.

Example:
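A minimal sketch:

pipelines:
  default:
    - step:
        script:
          - echo "Runs on every push to any branch without its own pipeline"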

tags: Defines all tag-specific build pipelines. The names or expressions in this section are matched against tags and annotated tags in your Git repository.

Example:
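A sketch matching the sentence below:

pipelines:
  tags:
    release-*:
      - step:
          script:
            - echo "Runs when a tag starting with release- is pushed"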

This configuration triggers a pipeline when a tag starting with “release-” is pushed.

pull-requests: A special pipeline that only runs on pull requests initiated from within your repo. It merges the destination branch into your working branch before it runs. Pull requests from a forked repository don't trigger the pipeline. If the merge fails, the pipeline stops.

Pull request pipelines run in addition to any branch and default pipelines that are defined, so if the definitions overlap you may get 2 pipelines running at the same time.

If you already have branches in your configuration, and you want them all to only run on pull requests, replace the keyword branches with pull-requests.

Example:
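An illustrative snippet (the branch pattern is a placeholder):

pipelines:
  pull-requests:
    feature/*:
      - step:
          script:
            - echo "Runs on pull requests from feature branches"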

custom: Defines pipelines that can only be triggered manually or scheduled from the Bitbucket Cloud interface.

Example:
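An illustrative snippet (the custom pipeline names are placeholders):

pipelines:
  custom:
    sonar:
      - step:
          script:
            - echo "Manually triggered Sonar scan"
    deploy-to-prod:
      - step:
          script:
            - echo "Manually triggered production deployment"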

With a configuration like the one above, you should see the following pipelines in the Run pipeline dialog in Bitbucket Cloud:

variables: [Custom pipelines only] Contains variables that are supplied when a pipeline is launched. To enable the variables, define them under the custom pipeline that you want to enter when you run the pipeline:

Example:
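A minimal sketch:

pipelines:
  custom:
    deploy:
      - variables:
          - name: Version
      - step:
          script:
            - echo "Deploying $Version"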

Then, when you run a custom pipeline (Branches ⋯ Run pipeline for a branch > Custom), you'll be able to fill them in.

The keyword variables can also be part of the definition of a service.

bookmarks: Defines all bookmark-specific build pipelines. The names or expressions in this section are matched against bookmarks in your Mercurial repository.

Example:
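A minimal sketch (the Mercurial bookmark name is illustrative):

pipelines:
  bookmarks:
    environment/staging:
      - step:
          script:
            - echo "Runs when the environment/staging bookmark is pushed"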

Glob patterns

Glob patterns don't allow any expression to start with a star. Every expression that starts with a star needs to be put in quotes.

feature/*

  • Matches with feature/<any_branch>.

  • The glob pattern doesn't match the slash ( /), so Git branches like feature/<any_branch>/<my_branch> are not matched for feature/*.

feature/bb-123-fix-links

  • If you specify the exact name of a branch, a tag, or a bookmark, the pipeline defined for the specific branch overrides any more generic expressions that would match that branch. For example, let's say you specify a pipeline for feature/* and feature/bb-123-fix-links. On a commit to the feature/bb-123-fix-links branch, Pipelines executes the steps defined for feature/bb-123-fix-links and won't execute the steps defined in the feature/*.

'*'

  • Matches all branches, tags, or bookmarks. The star symbol ( * ) must be between single quotes.

  • This glob pattern doesn't match the slash (/ ), so Git branches like feature/bb-123-fix-links are not matched for '*'. If you need the slash to match, use '**' instead of '*'.

'**'

  • Matches all branches, tags, or bookmarks. For example, it includes branches with the slash ( /) like feature/bb-123-fix-links. The ** expression must be in quotes.

'*/feature'

  • This expression requires quotes.

'main' and duplicate branch names

  • Names in quotes are treated the same way as names without quotes. For example, Pipelines sees main and 'main' as the same branch names.

  • In the situation described above, Pipelines will match only against one name (main or 'main', never both).

  • Try to avoid duplicating names in your bitbucket-pipelines.yml file.

Source: https://support.atlassian.com/bitbucket-cloud/docs/pipeline-triggers/


HawkScan and Bitbucket Pipelines

Adding StackHawk to Bitbucket Pipelines is simple. In this guide we will describe how to do it with concrete examples you can try yourself. We will describe three scenarios:

  1. Scan a publicly available endpoint such as example.com
  2. Scan a service running on localhost
  3. Scan an application stack running locally in Docker Compose

Create a Bitbucket Repository

If you don’t have one already, create a new Bitbucket account. Then create a new repository to contain the configurations for the examples below.

Secure Your API Key

Your API key should be kept secret. Rather than saving it in your repository, store it as a secret environment variable in Bitbucket Pipelines.

Log on to Bitbucket and navigate to your repository. From the left-hand pane, select (⚙️) Repository settings, and then below PIPELINES, select Repository variables.

[Screenshots: Repository Settings and Repository Variables]

Add an environment variable called HAWK_API_KEY (the name assumed throughout the examples below), and enter your API key from the HawkScan app. If you need to look up your API key or create a new one, navigate to your API Keys in the StackHawk platform.


Now you’re ready to define your scan and pipeline configurations.

Scenario One: External Site Scanning

In this scenario you will scan an existing external site. Typically, this would be your own integration test site running your latest pre-production code. For this simple example we will use example.com as our external site.

At the base of your repository, create a file with the following contents.

bitbucket-pipelines.yml
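A sketch of this file; the stackhawk/hawkscan image and API_KEY variable follow StackHawk's documented Docker usage, but treat the exact flags as assumptions:

pipelines:
  default:
    - step:
        name: Run HawkScan
        services:
          - docker
        script:
          - >
            docker run -v $(pwd):/hawk -t
            -e API_KEY="${HAWK_API_KEY}"
            stackhawk/hawkscan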

This is a single-step pipeline that enables Docker, and then runs HawkScan as a Docker container. The Docker command to run HawkScan is a little long, so we break it up into multiple lines using the YAML folded-block character (>) to fold the following lines into one long line, removing any newline characters.

Notice that we pass the API key to HawkScan using an environment (-e) flag on the docker run command.

Next, add a HawkScan configuration file to the root of your repository like so.

stackhawk.yml
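A minimal sketch; the app.host and app.env keys follow HawkScan's standard configuration, and the applicationId value is a placeholder:

app:
  applicationId: <your_application_id>   # your StackHawk app ID
  env: Pre-production
  host: https://example.com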

Set applicationId to the StackHawk application ID for your application. You can find it in StackHawk Applications.

Commit your code and push it to Bitbucket to initiate a pipeline run. You can watch your scan progress in Bitbucket, and check the StackHawk Scans console to see your results.

Scenario Two: Localhost Scanning

In this scenario we will start a service locally and scan it on the localhost address. You can use this approach to scan your own application within the Bitbucket Pipelines build environment.

For this scenario, you will fire up Nginx in a Docker container, and scan it at the localhost address. Any application running on the localhost address can be scanned. It doesn’t need to be in a Docker container!

At the base of your repository, create a file with the following contents.

bitbucket-pipelines.yml
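A sketch of this pipeline; the wait loop and the networking flag on the HawkScan command are assumptions:

pipelines:
  default:
    - step:
        name: Scan localhost with HawkScan
        services:
          - docker
        script:
          # 1. start Nginx listening on localhost port 8080
          - docker run --name nginx --detach --publish 8080:80 nginx
          # 2. wait for Nginx to respond
          - timeout 60 sh -c 'until curl --silent --fail http://localhost:8080; do sleep 2; done'
          # 3. run HawkScan; the networking flag shown here is an assumption
          - >
            docker run --network host -v $(pwd):/hawk -t
            -e API_KEY="${HAWK_API_KEY}"
            stackhawk/hawkscan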

The script in this pipeline has three steps:

  1. start an Nginx container, listening on localhost port 8080
  2. wait for Nginx to become responsive at http://localhost:8080
  3. run HawkScan.

Notice the extra networking flag on the HawkScan Docker command line (shown as --network host in the sketch above, an assumption). This allows HawkScan to reach services on the localhost address, such as the Nginx container started in step 1 of the script. See the Bitbucket security bulletin for more information on why this flag is necessary.

Add a HawkScan configuration to the root of your repository:

stackhawk.yml
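A minimal sketch, pointing the scan at the local Nginx container:

app:
  applicationId: <your_application_id>   # your StackHawk app ID
  env: Pre-production
  host: http://localhost:8080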

Set applicationId to the StackHawk application ID for your application. You can find it from the StackHawk Applications screen.

Commit your code and push it to Bitbucket to initiate a pipeline run. You can watch your scan progress in Bitbucket, and check the StackHawk Scans console to see your results.

Scenario Three: Docker Compose Scanning

Docker Compose is a great way to build up a multi-tier application or set of microservices to create a repeatable integration test environment. You can then add HawkScan as an overlay Docker Compose configuration.

For this scenario, we will start up an Nginx container using Docker Compose, and then scan it by overlaying another Docker Compose configuration for the HawkScan container.

Create the following pipeline configuration file in the base of your repository.

bitbucket-pipelines.yml
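A sketch of this pipeline, assuming docker-compose is available on the build image; the wait loop is an assumption:

pipelines:
  default:
    - step:
        name: Scan a Docker Compose stack
        services:
          - docker
        script:
          # 1. bring up the base configuration (the nginx service)
          - docker-compose -f docker-compose-base.yml up --detach
          # 2. wait for nginx to become reachable on localhost
          - timeout 60 sh -c 'until curl --silent --fail http://localhost:80; do sleep 2; done'
          # 3. overlay the HawkScan configuration (the hawkscan service)
          - >
            docker-compose -f docker-compose-base.yml -f docker-compose-hawkscan.yml
            up --abort-on-container-exit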

There are three steps in the script defined above:

  1. Bring up the docker-compose-base.yml configuration, which contains the nginx service.
  2. Wait for the nginx container to become reachable.
  3. Add the docker-compose-hawkscan.yml configuration, which contains the hawkscan service.

Notice the --abort-on-container-exit flag in the third script step. This flag tells Docker Compose to bring down the whole environment when HawkScan finishes and its container exits.

Add the Docker Compose configuration file to your repo.

docker-compose-base.yml
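A sketch of this file; the Compose file version is an assumption:

version: "3.7"
services:
  nginx:
    image: nginx
    ports:
      # listen on localhost only, so the pipeline can poll it before scanning
      - "127.0.0.1:80:80"
    logging:
      # suppress request logging; HawkScan probes many URLs
      driver: none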

The nginx service runs the nginx Docker container and listens on localhost port 80. We only listen on localhost so that we can check it with a simple test to make sure it is up and listening before we attempt to scan it. The scan itself will use the private bridge network set up by Docker Compose, which allows container services to communicate with each other by name.

We also set the logging driver for the nginx service to none. Since HawkScan will be probing many URLs on nginx, logging would generate excessive output in your pipeline results.

Next, create a Docker Compose configuration for HawkScan in a file named docker-compose-hawkscan.yml.

docker-compose-hawkscan.yml
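A sketch of this file; the /hawk mount point follows StackHawk's documented container usage:

version: "3.7"
services:
  hawkscan:
    image: stackhawk/hawkscan
    environment:
      # passed through from the secured repository variable
      API_KEY: ${HAWK_API_KEY}
    volumes:
      # mount the repo so HawkScan can find stackhawk.yml
      - .:/hawk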

This file creates the hawkscan service, which runs the stackhawk/hawkscan container. It passes along the HAWK_API_KEY environment variable from your secured Repository Variables, and it mounts the current working directory to /hawk within the container so that HawkScan can find your HawkScan configuration files.

Add your HawkScan configuration file, stackhawk.yml, to the base of your repo.

stackhawk.yml
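A minimal sketch, targeting the nginx service by name on the Compose network:

app:
  applicationId: <your_application_id>   # your StackHawk app ID
  env: Pre-production
  host: http://nginx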

Set applicationId to the StackHawk application ID for your application. You can find it from the Applications screen. Notice that the target host is http://nginx, since that service will be reachable by name within the Docker Compose private bridge network.

Commit your code and push it to Bitbucket to initiate a pipeline run. You can watch your scan progress in Bitbucket, and check the StackHawk Scans console to see your results.

For a more in-depth example of scanning a realistic integration test environment using Docker Compose, see our tutorial, Test-Driven Security With StackHawk Travis CI and Docker Compose. In this tutorial we build, seed, and scan a Django application with an Nginx proxy front-end and PostgreSQL database backend, all in Docker Compose.

Source: https://docs.stackhawk.com/continuous-integration/bitbucket-pipelines.html

