Fast Healthcare Interoperability Resources (FHIR) is an important standard created by Health Level Seven International (HL7), a healthcare standards organization. It describes data formats and elements, along with an application programming interface, for exchanging electronic health records (EHRs).
Earlier this year, Oracle’s Cloud Venture team had the opportunity to partner with James Agnew and his team from Smile CDR, a leader in the world of FHIR whose largest customers run on Oracle Database.
Smile CDR already has industry-leading features, so the goal was to execute a performance benchmark using some of the latest offerings of Oracle Cloud Infrastructure (OCI) and take existing data points to new levels. The team deployed Oracle Autonomous Database with auto-scaling enabled, Kubernetes, and Oracle Linux.
The results exceeded any known FHIR benchmark on any cloud. Overall throughput for a collection of roughly 1 million patient records on OCI was 22,251 resources per second; for comparison, a prior benchmark on another cloud reached 11,717 resources per second, so OCI delivered nearly double the throughput.
While we are pleased with these results, we still have plenty of room to grow, and we look forward to continuing to work with the Smile CDR team to deliver industry-leading scale and performance for enterprises.
Below is a bit more on the methodology and results from the benchmark exercise.
Environment
The architecture used for this test is shown in the diagram below.

Database Environment Specifications
Autonomous Database
- OCPU count: 16
- OCPU auto-scaling: Enabled
- Storage: 5 TB
- Storage auto-scaling: Enabled (allocated storage: 2.828 TB)
- Database version: 19c
Autonomous Database Network
- Access type: Allow secure access from specific IPs and VCNs
- Access control list: Enabled
- Mutual TLS (mTLS) authentication: Not required
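Because mTLS is not required and access is restricted by the ACL, clients can connect to the Autonomous Database over ordinary one-way TLS using just a connection string, with no client wallet. The following is a minimal sketch of such a connection using plain JDBC (the Oracle JDBC driver, ojdbc8 or later, must be on the classpath); the hostname, service name, port, and credentials below are placeholders, not values from this benchmark.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AdbTlsConnect {
    public static void main(String[] args) throws Exception {
        // With mTLS not required, a one-way TLS connection needs only the
        // connection descriptor -- no client wallet. Host, port, and service
        // name are placeholders for an actual Autonomous Database endpoint.
        String url = "jdbc:oracle:thin:@(description=(retry_count=20)(retry_delay=3)"
                + "(address=(protocol=tcps)(port=1521)(host=adb.us-ashburn-1.oraclecloud.com))"
                + "(connect_data=(service_name=example_fhirdb_high.adb.oraclecloud.com))"
                + "(security=(ssl_server_dn_match=yes)))";

        try (Connection conn = DriverManager.getConnection(
                     url, "ADMIN", System.getenv("ADB_PASSWORD")); // password from env, not hardcoded
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT banner FROM v$version")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // e.g. "Oracle Database 19c ..."
            }
        }
    }
}
```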
Kubernetes Environment Specifications
Node Pool
- Node pools: 1
- Kubernetes version: v1.21.5
- Image name: Oracle-Linux-7.9-2022.01.24-0
- Shape: VM.Standard.E4.Flex
- Total worker nodes: 10
- Memory per node (GB): 32
- OCPUs per node: 8
Node details
- Shape: VM.Standard.E4.Flex
- OCPU count: 8
- Network bandwidth (Gbps): 8
- Memory (GB): 32
- Local disk: Block storage only
Node Instance’s Boot Volume details
- Size: 47 GB
- Target performance: Balanced (VPU/GB: 10)
- Target IOPS: 2,820
- Target throughput: 22.56 MB/s

At the Balanced tier, OCI block volumes are provisioned at 60 IOPS and 480 KB/s per GB, so a 47 GB boot volume yields 47 × 60 = 2,820 IOPS and 47 × 480 KB/s = 22.56 MB/s.
Cluster’s Load Balancer Information
- Shape: Flexible
- Min bandwidth: 50 Mbps
- Max bandwidth: 1,000 Mbps
Test 1: Data Ingestion
To demonstrate mass data ingestion, a collection of roughly 1 million patient records, totaling 2.182 TiB of input data, was generated using Synthea, an open-source synthetic patient generator.
The FHIR $import operation (Bulk Import / Bulk Data Access Implementation Guide) was used to ingest the initial data.
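To illustrate how an $import job is started, here is a minimal sketch using the open-source HAPI FHIR generic client. Note that $import is still a draft operation, so the parameter names (inputFormat, inputSource, input) follow the proposed Bulk Import specification and may differ from what a given server, including Smile CDR, actually expects; the server URL and NDJSON file locations are placeholders.

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Parameters;
import org.hl7.fhir.r4.model.StringType;
import org.hl7.fhir.r4.model.UriType;

public class BulkImportKickoff {
    public static void main(String[] args) {
        FhirContext ctx = FhirContext.forR4();
        IGenericClient client = ctx.newRestfulGenericClient("https://fhir.example.com/fhir"); // placeholder

        // Parameters resource per the draft Bulk Import proposal; the exact
        // parameter names are an assumption and may vary between servers.
        Parameters input = new Parameters();
        input.addParameter().setName("inputFormat")
                .setValue(new StringType("application/fhir+ndjson"));
        input.addParameter().setName("inputSource")
                .setValue(new UriType("https://objectstorage.example.com/synthea/"));
        Parameters.ParametersParameterComponent file = input.addParameter().setName("input");
        file.addPart().setName("type").setValue(new StringType("Patient"));
        file.addPart().setName("url")
                .setValue(new UriType("https://objectstorage.example.com/synthea/Patient.ndjson"));

        // Kick off the import; bulk operations run asynchronously, so the
        // server typically replies 202 Accepted with a polling location.
        Parameters response = client.operation()
                .onServer()
                .named("$import")
                .withParameters(input)
                .withAdditionalHeader("Prefer", "respond-async")
                .execute();
        if (response != null) {
            System.out.println(ctx.newJsonParser().setPrettyPrint(true)
                    .encodeResourceToString(response));
        }
    }
}
```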
This test produced the following results:
- Total time elapsed to ingest 2.182 TiB of data: 12.7 hours
- Overall throughput: 22,251.1 resources/second
- Time to ingest 1 TiB of data: 349 minutes

Sustained for 12.7 hours, that throughput corresponds to just over 1 billion FHIR resources ingested in total, or roughly 1,000 resources per synthetic patient.

Test 2: Online Transactional Processing
The next test simulated operational use of the data in the system. An increasing number of concurrent users was added, and each simulated user performed an equal mix of the following operations, with read and write operations executing concurrently.
The source for this test can be found in the class Test06_MixedBag in the following repository: https://github.com/jamesagnew/fhir-server-performance-test-suite/
The operations included in this test (a minimal client-side sketch follows the list) were:
- A FHIR search for resources belonging to a given patient
- A FHIR read for a specific resource by ID
- A FHIR update on an existing individual resource
- A FHIR create for a new individual resource
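For illustration, here is a minimal sketch of this four-operation mix using the HAPI FHIR client, with a fixed-size thread pool standing in for concurrent users. It is not the benchmark code itself (see Test06_MixedBag in the repository above for that), and the server URL and resource IDs are placeholders.

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.Observation;
import org.hl7.fhir.r4.model.Patient;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MixedWorkload {
    public static void main(String[] args) throws InterruptedException {
        FhirContext ctx = FhirContext.forR4(); // thread-safe, shared across users
        int concurrentUsers = 100; // the benchmark swept this from 10 to 500
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);

        for (int i = 0; i < concurrentUsers; i++) {
            pool.submit(() -> {
                // One client per simulated user; placeholder server URL
                IGenericClient client = ctx.newRestfulGenericClient("https://fhir.example.com/fhir");

                // 1. Search for resources belonging to a given patient
                Bundle results = client.search()
                        .forResource(Observation.class)
                        .where(Observation.PATIENT.hasId("Patient/123")) // placeholder ID
                        .returnBundle(Bundle.class)
                        .execute();
                System.out.println("Search returned " + results.getEntry().size() + " entries");

                // 2. Read a specific resource by ID
                Patient patient = client.read().resource(Patient.class).withId("123").execute();

                // 3. Update an existing individual resource
                patient.addName().setFamily("Updated");
                client.update().resource(patient).execute();

                // 4. Create a new individual resource
                Patient newPatient = new Patient();
                newPatient.addName().setFamily("Synthetic").addGiven("Test");
                client.create().resource(newPatient).execute();
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}
```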
This test produced the following results:
| Concurrent Users | Min Transaction Time (ms) | Mean Transaction Time (ms) | 75th Percentile Transaction Time (ms) | 99th Percentile Transaction Time (ms) |
|---|---|---|---|---|
| 10 | 9 | 55.9 | 44 | 428 |
| 50 | 8 | 49.7 | 34 | 511 |
| 100 | 8 | 60.4 | 58 | 559 |
| 200 | 8 | 120.6 | 79 | 944 |
| 300 | 7 | 220.8 | 97 | 2153 |
| 400 | 8 | 325.9 | 172 | 3302 |
| 500 | 8 | 613.4 | 351 | 4728 |
The mean transaction time is shown in the following chart:

The 99th percentile transaction time is shown in the following chart:

Special thanks to all those who supported this effort, including James Agnew, Dr. Sarah Matt, Nithisha Javadi, Abel Bacchus, Adam Cole, Peter Cavanaugh, Jay Jackson, Sanjay Rahane, Shaffiq Ladak, and many others.