How to Migrate ‘Text Field (Read-Only)’ Jira Custom Fields to Cloud

This guide explains how to automatically migrate Jira custom fields of type ‘Text Field (Read-Only)’ from Data Center/Server to Cloud using Python and the Jira Cloud REST API. It shows one of the available methods for transferring custom fields that are not migrated automatically.

Jira custom fields that are not migrated

According to Atlassian’s official documentation, certain custom fields are not automatically migrated to the Cloud using the Jira Cloud Migration Assistant (JCMA). These include:

  • Text field (read only)
  • Issue picker

If your projects use these fields, manual or alternative migration methods are required.

Is the Custom Field Needed on Cloud?

Before migrating a custom field, ensure it is actively used and necessary in your Jira Cloud environment. You can validate its usage in the following ways:

  • Check Screen Associations – Start by navigating to the Custom Fields section and checking if the field is associated with any screens. A field without screen associations may still be used in other contexts, so this is not definitive but is a good starting point.
  • Run a Database Query – Use a database query (provided in this article) to identify how many issues contain values for the field. This provides a clear picture of its actual usage.
  • Use a JQL Query – If you have permission to view all issues in the Jira instance, use a JQL query to check the field’s usage (a scripted version of this check is sketched after this list). For example, if the field is named Letter, you can run:
Letter IS NOT EMPTY 
  • Consult with Users – Talk to users to determine if the field is part of their filters, reports, or workflows.
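
If you prefer to script the JQL check, below is a minimal sketch that counts matching issues through the Jira Server/Data Center REST API. The base URL, the credentials, and the field name Letter are assumptions; adjust them to your instance.

import requests
from requests.auth import HTTPBasicAuth

# Assumed values - replace with your own Server/DC base URL and credentials.
base_url = "https://jira.example.com"
auth = HTTPBasicAuth("username", "password")

# maxResults=0 returns only the total count, not the issues themselves.
response = requests.get(
    f"{base_url}/rest/api/2/search",
    params={"jql": "Letter IS NOT EMPTY", "maxResults": 0},
    auth=auth,
)
response.raise_for_status()
print(f"Issues with a value in the field: {response.json()['total']}")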

Database Query

Once you’ve confirmed that the custom field is needed in Jira Cloud, you’ll need to run a database query to extract the field data. This data will be used by the Python script to update issues with the custom field values in Jira Cloud. Use the following SQL query:

SELECT DISTINCT p.pkey, i.issuenum, c.stringvalue
FROM customfieldvalue c, jiraissue i, project p
WHERE c.customfield IN (12345)
  AND c.issue = i.id
  AND i.project = p.id
ORDER BY 1, 2

Instructions:

  • Replace 12345 with the actual ID of your custom field from the Jira Data Center or Server instance.
  • After executing the query, export the results to a CSV file (an example of the expected layout is shown after these instructions).
  • Ensure the CSV delimiter is a comma (,). If a different delimiter, such as a semicolon (;), is used, you must update the script.
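
For reference, the script in this article expects the exported CSV to look roughly like this (the header row matches the query columns; the values are illustrative):

pkey,issuenum,stringvalue
ABC,101,Some read-only value
ABC,102,Another value
XYZ,7,Value from another project

The script joins the first two columns into an issue key (e.g. ABC-101) and writes the third column into the custom field.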

Steps on Jira Cloud

  1. Create a custom field
    • Before running the script, create the corresponding custom field in Jira Cloud (a sketch for looking up its Cloud ID follows these steps). Ensure:
      • The field type is Text Field (Read-Only).
      • The field is associated with the same screens as in your Jira Data Center/Server instance.
  2. Verify Permissions
    • Confirm that you have the necessary permissions to update all issues in Jira Cloud. This includes:
      • Edit Issue permissions for all projects.
      • Access to issues that may have restricted visibility (e.g., security levels or specific user permissions).
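
The migration script needs the Cloud ID of the newly created field (the customfield_NNNNN part used in its payload). Below is a minimal sketch for looking it up by name via the Jira Cloud REST API; the field name Letter is just the example from earlier.

import requests
from requests.auth import HTTPBasicAuth

site = "your_site"
auth = HTTPBasicAuth("email", "api_token")

# GET /rest/api/3/field returns all fields with their IDs and names.
response = requests.get(f"https://{site}.atlassian.net/rest/api/3/field", auth=auth)
response.raise_for_status()
for field in response.json():
    if field["name"] == "Letter":  # use your field's actual name
        print(field["id"])         # e.g. customfield_10123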

How the script works

As mentioned earlier, the script reads values from the exported CSV file and updates issues in Jira Cloud using the Jira Cloud REST API.

Key points to note:

  • Error Logging: The script logs any errors to a specified log file, allowing you to review and troubleshoot why certain issues were not updated.
  • Testing: It is recommended to test the script on a test Cloud instance first, both to investigate why some issues weren’t updated and to avoid any unexpected changes in production.

Script:

import requests
from requests.auth import HTTPBasicAuth
import json
import csv
from datetime import datetime

site = "your_site"  # the <site> part of https://<site>.atlassian.net
email = ""          # Atlassian account email
api_token = ""      # API token generated for that account

auth = HTTPBasicAuth(email, api_token)

FILE = "field_values.csv"  # CSV exported from the database query
LOGGING = "logging.txt"    # log file for failed updates


def update_field(key: str, field_value: str):

    update_issue_url = f"https://{site}.atlassian.net/rest/api/3/issue/{key}?notifyUsers=false"

    put_headers = {
        "Accept": "application/json",
        "Content-Type": "application/json"
    }

    payload = json.dumps({
        "fields": {
            "customfield_12345": fieldValue // replace 12345 with actual custom field Cloud ID
        }
    })

    try:
        put_response = requests.put(
            update_issue_url,
            data=payload,
            headers=put_headers,
            auth=auth
        )

        if put_response.status_code in (200, 204):
            print(f"Issue {key} updated successfully!")
        elif put_response.status_code == 404:
            print(f"Issue {key} doesn't exist on Cloud!")
        else:
            print(f"Issue {key} wasn't updated, status code {put_response.status_code}. Response: {put_response.text}")
            output_file.write(f"Issue {key} wasn't updated, status code {put_response.status_code}. Response: {put_response.text}")
    except requests.exceptions.RequestException as e:
        print(f"An error occurred while updating issue {key}: {e}")
        output_file.write(f"An error occurred while updating issue {key}: {e}")


output_file = open(LOGGING, "w")
output_file.write("Starting process at {}.\n".format(datetime.now().strftime("%d/%m/%Y %H:%M:%S")))
with open(FILE) as input_file:
    csv_reader = csv.reader(input_file, delimiter=",")
    for row in csv_reader:
        if "pkey" in row[0].lower():
            print("Skipping header line!")
        else:
            issue_key = row[0].strip() + "-" + row[1].strip()
            field_value = row[2].strip()
            update_field(issue_key, field_value)
output_file.write("Finished process at {}.\n".format(datetime.now().strftime("%d/%m/%Y %H:%M:%S")))
output_file.close()
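
To run the script, fill in site, email, and api_token (an API token can be generated from your Atlassian account’s security settings), make sure field_values.csv sits next to the script, and execute it with Python, for example python migrate_fields.py (the script file name is just an example).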

Keep in mind

  • Script Execution Time – The script’s execution time will vary based on the number of issues being updated. If there are many issues, the process may take a significant amount of time. Consider breaking the data into smaller chunks and running several copies of the script in parallel (a splitting sketch follows this list), or running the script during off-peak hours to minimize the impact on performance.
  • Test First – Always test the script on a test instance before running it on production. You can also run it on a smaller chunk of issues before running it for all issues.
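
If you decide to split the export, here is a minimal sketch that breaks the CSV into smaller files; the chunk size and output file names are arbitrary examples.

import csv

CHUNK_SIZE = 500  # arbitrary example size

def write_chunk(rows, header, number):
    with open(f"field_values_{number}.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(header)  # repeat the header so each chunk is self-contained
        writer.writerows(rows)

with open("field_values.csv", newline="") as source:
    reader = csv.reader(source)
    header = next(reader)
    chunk, chunk_number = [], 1
    for row in reader:
        chunk.append(row)
        if len(chunk) == CHUNK_SIZE:
            write_chunk(chunk, header, chunk_number)
            chunk, chunk_number = [], chunk_number + 1
    if chunk:  # write any remaining rows
        write_chunk(chunk, header, chunk_number)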

Reach out!

I hope this article was helpful to you! If you have any additional questions, want to dive deeper into the topic, or have ideas for improvement, I’d love to hear from you.

You can find links to my LinkedIn profile and email at the bottom of the page, or feel free to reach out via the Contact page.