Flagship + Cloudflare Worker Integration

This example guides you through integrating Flagship feature flags with Cloudflare Workers, enabling feature flagging and A/B testing at the edge.

📘

GitHub Repository

https://github.com/flagship-io/flagship-cloudflare-worker-example

Overview

This guide shows how to:

  • Use KV storage or direct integration for caching bucketing data to improve performance
  • Initialize the Flagship SDK in a Cloudflare Worker
  • Create a visitor object with context data from request headers or any other source
  • Fetch feature flags assigned to this visitor
  • Retrieve specific flag values for use in the application
  • Send analytics data back to Flagship
  • Ensure analytics are sent before the worker terminates

Prerequisites

  • A Flagship account with your Environment ID and API key
  • A Cloudflare account with the Wrangler CLI installed and authenticated
  • Node.js and yarn installed locally

Setup

  1. Create a Cloudflare Worker project:

Follow this link to set up a Cloudflare Worker project

  2. Install dependencies:
yarn add @flagship.io/js-sdk
  3. Configure your Flagship credentials as Cloudflare Worker secrets:
wrangler secret put FLAGSHIP_ENV_ID
wrangler secret put FLAGSHIP_API_KEY
  4. Create a KV namespace for caching if you choose to use KV storage:
wrangler kv:namespace create MY_APP_KV
  5. Update the wrangler.jsonc file with your KV namespace ID
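
A minimal wrangler.jsonc sketch showing the KV binding used in this guide is given below; the worker name, entry point, compatibility date, and namespace ID are placeholders to replace with your own values.

{
	// Placeholder project settings; use your own worker name and entry point
	"name": "my-flagship-worker",
	"main": "src/index.ts",
	"compatibility_date": "2024-01-01",
	"kv_namespaces": [
		{
			// Binding name referenced in the worker code as env.MY_APP_KV
			"binding": "MY_APP_KV",
			// Replace with the ID printed by the wrangler kv:namespace create command
			"id": "<your-kv-namespace-id>"
		}
	]
}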

Use KV storage or direct integration for bucketing data

Bucketing data contains information about your Flagship campaigns and variations, allowing the worker to make flag decisions at the edge without calling the Flagship API for every request.

Development Approach

Note: The following approaches are appropriate for development and testing environments.

Option 1: KV Storage

  1. Fetch bucketing data directly from the Flagship CDN:
# Replace YOUR_ENV_ID with your Flagship Environment ID
curl -s https://cdn.flagship.io/YOUR_ENV_ID/bucketing.json > bucketing-data.json
  2. Upload the bucketing data to your KV namespace:
wrangler kv:key put --binding=MY_APP_KV "initialBucketing" "$(cat bucketing-data.json)"

Option 2: Direct Integration

For direct integration, you'll need to:

  1. Fetch the bucketing data during your build process
  2. Save it as a JSON file in your project
  3. Import it directly in your worker code
# During build/deployment:
curl -s https://cdn.flagship.io/YOUR_ENV_ID/bucketing.json > src/bucketing-data.json

Then import it in your code:

import bucketingData from './bucketing-data.json';
// Use this data when initializing Flagship

Production Approach

For production environments, there are two recommended approaches. Both require setting up webhooks in the Flagship platform that trigger your CI/CD pipeline when campaigns are updated.
You can find more details in the Production Approach to retrieve and update bucketing data section later in this guide.

Initialize the Flagship SDK in a Cloudflare Worker

The first step to using Flagship in your Cloudflare Worker is to initialize the SDK. This sets up the connection with your Flagship project and configures how feature flags will be delivered.

To initialize the Flagship SDK in a Cloudflare Worker, you can use either the KV storage approach or the direct integration approach. Both methods allow you to cache bucketing data for improved performance.

With KV Storage

The KV storage approach involves retrieving the bucketing data from Cloudflare KV at runtime:

// Import the Flagship SDK edge bundle optimized for edge environments
import { DecisionMode, Flagship, LogLevel } from '@flagship.io/js-sdk/dist/edge.js';

export default {
	async fetch(request, env, ctx): Promise<Response> {
		// Access Flagship credentials from environment variables
		const { FLAGSHIP_ENV_ID, FLAGSHIP_API_KEY } = env;

		// Retrieve cached bucketing data from Cloudflare KV storage
		const initialBucketing = (await env.MY_APP_KV.get('initialBucketing', 'json')) || {};

		// Initialize Flagship SDK with credentials and configuration
		await Flagship.start(FLAGSHIP_ENV_ID, FLAGSHIP_API_KEY, {
			// Use edge bucketing mode for optimal performance in serverless environments
			decisionMode: DecisionMode.BUCKETING_EDGE,
			// Pass cached bucketing data
			initialBucketing,
			// Defer fetching campaign data until explicitly needed
			fetchNow: false,
			logLevel: LogLevel.DEBUG,
		});

		// Continue with the rest of your worker logic...
	},
};

With Direct Integration

The direct integration approach involves importing the bucketing data directly:

// Import the Flagship SDK edge bundle optimized for edge environments
import { DecisionMode, Flagship, LogLevel } from '@flagship.io/js-sdk/dist/edge.js';
// Import bucketing data directly
import initialBucketing from './bucketing-data.json';

export default {
	async fetch(request, env, ctx): Promise<Response> {
		// Access Flagship credentials from environment variables
		const { FLAGSHIP_ENV_ID, FLAGSHIP_API_KEY } = env;

		// Initialize Flagship SDK with credentials and embedded bucketing data
		await Flagship.start(FLAGSHIP_ENV_ID, FLAGSHIP_API_KEY, {
			// Use edge bucketing mode for optimal performance in serverless environments
			decisionMode: DecisionMode.BUCKETING_EDGE,
			// Use the imported bucketing data
			initialBucketing,
			// Defer fetching campaign data until explicitly needed
			fetchNow: false,
			logLevel: LogLevel.DEBUG,
		});

		// Continue with the rest of your worker logic...
	},
};

Configuration Options

  • decisionMode:

    • BUCKETING_EDGE is recommended for Workers as it makes decisions locally using bucketing data
    • API mode would call Flagship servers for each decision (not recommended for Workers)
  • initialBucketing:

    • Pre-loaded campaign data to make local decisions without API calls
    • Retrieved from KV storage or embedded in your code
  • fetchNow:

    • Set to false to defer fetching campaign data until it is explicitly needed

Create a visitor object with context data from request headers or any other source

The visitor object represents a user of your application. You need to create one for each request, providing a unique ID and relevant context data that can be used for targeting.

// From the worker fetch handler
const { searchParams } = new URL(request.url);

// Get visitor ID from query params or let SDK generate one
const visitorId = searchParams.get('visitorId') || undefined;

// Create a visitor with context data extracted from request headers
// This context can be used for targeting rules in Flagship campaigns
const visitor = Flagship.newVisitor({
	visitorId,
	// Set GDPR consent status for data collection
	hasConsented: true,
	context: {
		userAgent: request.headers.get('user-agent') || 'unknown',
		country: request.headers.get('cf-ipcountry') || 'unknown',
		path: request.url,
		referrer: request.headers.get('referer') || 'unknown',
		isPremiumUser: searchParams.get('premium') === 'true',
		// You can add any additional context data that's relevant for your targeting
		// For example:
		// deviceType: detectDeviceType(request.headers.get('user-agent')),
	},
});
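
The detectDeviceType function referenced in the commented example above is not part of the Flagship SDK; it is a hypothetical helper you would implement yourself. A minimal sketch based on simple user-agent matching could look like this:

// Hypothetical helper: derives a coarse device type from the User-Agent header.
// The patterns below are illustrative only; adapt them to your own needs.
function detectDeviceType(userAgent: string | null): string {
	if (!userAgent) return 'unknown';
	if (/mobile|iphone|android.+mobile/i.test(userAgent)) return 'mobile';
	if (/ipad|tablet|android(?!.*mobile)/i.test(userAgent)) return 'tablet';
	return 'desktop';
}

With such a helper in place, a deviceType attribute can be added to the visitor context alongside the other values shown above.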

You can include any information in the context object that might be useful for targeting. Cloudflare Workers provide access to information like country (cf-ipcountry), user agent, and more. Common examples include:

  • Demographics: age, gender, location
  • Technical: device, browser, OS, screen size
  • Behavioral: account type, subscription status
  • Custom: any application-specific attributes

This context is used by Flagship for targeting rules, so include any attributes that might be useful for segmenting your users.

Fetch feature flags assigned to this visitor

Once you have a visitor object, you need to fetch the feature flags assigned to them based on targeting rules:

// Fetch feature flags assigned to this visitor
// This applies all targeting rules based on visitor context
await visitor.fetchFlags();

// ... Continue with the rest of your worker logic

This operation evaluates all campaign rules against the visitor's context and assigns flag variations accordingly. With edge bucketing, this happens locally without any network requests.

Retrieve specific flag values for use in the application

After fetching flags, you can retrieve specific flag values for use in your application. The SDK provides a type-safe way to access flag values with default fallbacks.

// Retrieve specific flag values with default fallbacks if flags aren't defined
const welcomeMessage = visitor.getFlag('welcome_message').getValue('Welcome to our site!');
const isFeatureEnabled = visitor.getFlag('new_feature_enabled').getValue(false);

// You can get different types of values:
// Strings
const title = visitor.getFlag('page_title').getValue('Default Title');

// Numbers
const discountPercent = visitor.getFlag('discount_percentage').getValue(0);

// Objects
const uiConfig = visitor.getFlag('ui_config').getValue({
	theme: 'light',
	showBanner: false,
	menuItems: ['home', 'products', 'contact'],
});

// Arrays
const items = visitor.getFlag('menu_items').getValue(['home', 'about']);

Always provide a default value that matches the expected type. This ensures your application works even if the flag isn't defined or there's an issue fetching flags.

Note: calling getValue automatically activates the flag, meaning it will be counted in the reporting.

Send analytics data back to Flagship

To measure the impact of your feature flags, you need to send analytics data back to Flagship. This includes page views, conversions, transactions, and custom events.

// Send analytics data back to Flagship for campaign reporting
// Note: HitType (and EventCategory, used in the commented example below) must also be
// imported from the Flagship SDK
visitor.sendHits([
	{
		type: HitType.PAGE_VIEW,
		documentLocation: request.url,
	},
	// You can send additional hits like events
	// {
	//	type: HitType.EVENT,
	//	category: EventCategory.ACTION_TRACKING,
	//	action: 'feature_view',
	//	label: 'new_feature',
	//	value: isFeatureEnabled ? 1 : 0,
	// },
]);

Analytics data is crucial for measuring the impact of your feature flags in A/B testing scenarios. You can track page views, events, transactions, and more.

Ensure analytics are sent before the worker terminates

A Cloudflare Worker can be terminated as soon as it returns its response, potentially before pending analytics data has been sent. To prevent this, use ctx.waitUntil:

// Ensure analytics are sent before the worker terminates
ctx.waitUntil(Flagship.close());

// Return feature flag values as JSON response
return new Response(
	JSON.stringify({
		message: welcomeMessage,
		features: {
			newFeatureEnabled: isFeatureEnabled,
		},
	}),
	{
		headers: { 'content-type': 'application/json' },
	}
);

This ensures that all pending analytics are sent before the worker terminates, giving you accurate reporting data.

Production Approach to retrieve and update bucketing data

For production environments, there are two recommended approaches. Both require setting up webhooks in the Flagship platform that trigger your CI/CD pipeline when campaigns are updated:

Common Setup for Both Approaches

  1. Set up a webhook in the Flagship Platform that triggers whenever a campaign is updated
  2. Configure the webhook to call your CI/CD pipeline or serverless function
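
As one way to implement step 2, assuming the pipeline is triggered by the GitHub Actions workflows shown below (which listen for a repository_dispatch event of type flagship-campaign-updated), the webhook target could be a small Worker that forwards the notification to the GitHub API. This is a sketch only; OWNER, REPO, and the GITHUB_TOKEN binding are placeholders you supply yourself.

// Hypothetical webhook forwarder: relays Flagship campaign-update notifications to GitHub
// so that the repository_dispatch-triggered workflows below run.
export default {
	async fetch(request: Request, env: { GITHUB_TOKEN: string }): Promise<Response> {
		if (request.method !== 'POST') {
			return new Response('Method not allowed', { status: 405 });
		}
		// Trigger the repository_dispatch event consumed by the CI workflows below
		const response = await fetch('https://api.github.com/repos/OWNER/REPO/dispatches', {
			method: 'POST',
			headers: {
				Authorization: `Bearer ${env.GITHUB_TOKEN}`,
				Accept: 'application/vnd.github+json',
				'User-Agent': 'flagship-webhook-forwarder',
			},
			body: JSON.stringify({ event_type: 'flagship-campaign-updated' }),
		});
		return new Response(null, { status: response.ok ? 204 : 502 });
	},
};

Any serverless function or CI provider endpoint that performs an equivalent call would work just as well.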

The primary difference between the approaches is where the bucketing data is stored:

Option 1: Webhook + KV Storage

This approach stores bucketing data in Cloudflare KV:

name: Update Flagship Bucketing Data

on:
  repository_dispatch:
    types: [flagship-campaign-updated]

jobs:
  update-bucketing:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Fetch latest bucketing data
        run: |
          curl -s https://cdn.flagship.io/${{ secrets.FLAGSHIP_ENV_ID }}/bucketing.json > bucketing.json

      - name: Update Cloudflare KV
        env:
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CF_ACCOUNT_ID }}
          CLOUDFLARE_API_TOKEN: ${{ secrets.CF_API_TOKEN }}
        run: |
          npx wrangler kv:key put --binding=MY_APP_KV "initialBucketing" "$(cat bucketing.json)"

Option 2: Direct Integration via Deployment

This approach embeds bucketing data directly in your worker code:

name: Deploy Worker with Latest Bucketing Data

on:
  repository_dispatch:
    types: [flagship-campaign-updated]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Fetch latest bucketing data
        run: |
          curl -s https://cdn.flagship.io/${{ secrets.FLAGSHIP_ENV_ID }}/bucketing.json > src/bucketing-data.json

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: yarn install

      - name: Deploy to Cloudflare
        run: yarn deploy
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CF_API_TOKEN }}

Trade-offs between approaches:

KV Storage Approach:

  • Performance: Adds KV read latency to each request (typically 5-50ms)
  • Flexibility: Allows updating flags without redeploying code
  • Reliability: If KV is unavailable, flags might not work correctly
  • Cost: Incurs KV read costs for each worker invocation
  • Scalability: KV has usage limits that could be hit with very high traffic
  • Debugging: Easier to inspect current bucketing data separately from code
  • Isolation: Clearer separation between code and configuration

Direct Integration Approach:

  • Performance: Faster initialization with no external calls during startup
  • Deployment: Requires redeployment for each flag configuration change
  • Reliability: Fewer runtime dependencies, more predictable behavior
  • Cost: No KV costs, but more frequent deployments might increase costs
  • Bundle size: Larger worker bundle due to embedded bucketing data
  • Caching: Better cold start performance since data is bundled

When to choose the KV Storage approach:

  • When flag configurations change frequently
  • When you need to update flags without touching code
  • When deployment pipelines are slow or restricted
  • When you have complex approval workflows for code changes

When to choose the Direct Integration approach:

  • When performance is critical (especially cold start times)
  • When flag configurations change infrequently
  • When simplicity and fewer dependencies are priorities

Choose the approach that best fits your deployment frequency and performance requirements.

Learn More