Lock API

Primatomic provides robust distributed synchronization primitives including exclusive locks (mutexes) and read/write locks. All lock operations are serialized through the Raft consensus log, ensuring consistent ordering and fairness across the entire cluster.

  • Distributed Mutual Exclusion: Guaranteed exclusive access across multiple nodes
  • Read/Write Locks: Concurrent readers with exclusive writers for optimized performance
  • Automatic TTL Cleanup: Prevent deadlocks from crashed clients with time-to-live expiration
  • Fair Queuing: FIFO ordering for waiting clients prevents starvation
  • Blocking Operations: Optional timeouts for waiting on lock availability
  • Crash Recovery: Locks automatically released when client connections are lost

The create_lock RPC creates or retrieves a named lock. Creating a lock that already exists is not an error; the call returns the existing lock identifier. Locks expire automatically when a ttl_ms is supplied.

message CreateLockRequest {
  string key = 1;
  string namespace = 2;
  optional uint64 ttl_ms = 3; // Time to live in milliseconds
}

message CreateLockResponse {
  string key = 1;
  string namespace = 2;
  Status status = 3;
}
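
As an illustration, the Python client shown later on this page exposes this RPC as create_lock; a minimal sketch (only the arguments used in that example are assumed):

# Create (or fetch) a lock that expires automatically after 30 seconds
await client.create_lock("myapp", "temp_lock", ttl_ms=30000)

# Without ttl_ms the lock has no time-based expiration.
await client.create_lock("myapp", "resource_lock")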

The acquire_lock RPC attempts to acquire exclusive ownership of a lock. A timeout in milliseconds can be specified; if the lock is held by another client, the request blocks until the lock becomes available or the timeout is reached.

message AcquireLockRequest {
  string key = 1;
  string namespace = 2;
  optional uint64 timeout_ms = 3; // Time to wait for the lock in milliseconds
}

message AcquireLockResponse {
  string key = 1;
  string namespace = 2;
  bool acquired = 3;
  Status status = 4;
}
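
A sketch of a blocking acquire with a timeout, using the acquire_lock client method from the examples below (those examples surface an expired timeout as a TimeoutError):

try:
    # Blocks until the lock is granted or 5 seconds elapse.
    await client.acquire_lock("myapp", "batch_job", timeout_ms=5000)
except TimeoutError:
    print("Lock is still held by another client")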

The release_lock RPC releases ownership of the lock. Clients that do not hold the lock receive a FAILED_PRECONDITION error.

message ReleaseLockRequest {
  string key = 1;
  string namespace = 2;
}

message ReleaseLockResponse {
  string key = 1;
  string namespace = 2;
  bool released = 3;
  Status status = 4;
}
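
In practice the release belongs in a finally block so the lock is not leaked when the critical section raises; a sketch using the client methods from the examples below (the lock name and generate_report helper are placeholders):

await client.acquire_lock("myapp", "report_job", timeout_ms=5000)
try:
    generate_report()  # placeholder for the critical section
finally:
    # Only the current holder may release; any other client gets FAILED_PRECONDITION.
    await client.release_lock("myapp", "report_job")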

The force_release_lock RPC removes a lock entry without validating the owner. It is useful for administrative cleanup of stale locks.

ForceReleaseLockRequest {
  namespace: string
  name: string
}
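
A sketch of an administrative cleanup flow, combining force_release_lock with the describe_lock call documented next (how a lock is judged to be stale is left to the operator):

info = await client.describe_lock("myapp", "stuck_lock")
if info.exists:
    # Bypasses ownership validation -- restrict this to operator tooling.
    await client.force_release_lock("myapp", "stuck_lock")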

The describe_lock RPC returns whether a lock exists, along with any metadata associated with it (owner, expiration time, and waiting clients).

DescribeLockRequest {
  namespace: string
  name: string
}
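
As a sketch, this metadata can feed the monitoring and alerting suggested under best practices; the response fields follow the example further down, and the waiter threshold is arbitrary:

info = await client.describe_lock("myapp", "data_cache")
if info.exists and len(info.waiters) > 10:
    # Many clients are queued behind the current holder: likely contention.
    print(f"High contention: owner={info.owner}, waiters={len(info.waiters)}")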

Use the create_rw_lock, acquire_rw_lock, and release_rw_lock RPCs to synchronize readers and writers. Multiple readers may hold the lock at once, but writers have exclusive access. The create_rw_lock call accepts an optional ttl_ms, similar to create_lock.

AcquireRWLockRequest {
  namespace: string
  name: string
  write: bool
  timeout_ms: uint64
}

The snippets below use the Python client. They are shown as standalone fragments; the await and async with forms assume they run inside an async function with an event loop.

import asyncio
import primatomic

client = primatomic.Client("your-workspace.primatomic.com:443",
                           jwt_token="your-api-token")

# Basic lock usage with context manager
async with client.lock("myapp", "resource_lock"):
    # Critical section - only one client can execute this at a time
    print("Performing exclusive operation...")
    await asyncio.sleep(1)
    print("Operation complete")

# Manual lock management with timeout
try:
    await client.acquire_lock("myapp", "batch_job", timeout_ms=5000)
    try:
        # Perform batch processing
        process_batch_data()
    finally:
        await client.release_lock("myapp", "batch_job")
except TimeoutError:
    print("Could not acquire lock within 5 seconds")

# Lock with automatic TTL (expires in 30 seconds)
await client.create_lock("myapp", "temp_lock", ttl_ms=30000)

# Multiple readers can access concurrently
async with client.rw_lock("myapp", "data_cache", write=False):
    # Read operation - multiple clients can execute simultaneously
    data = read_cached_data()
    process_read_only(data)

# Exclusive writer access
async with client.rw_lock("myapp", "data_cache", write=True):
    # Write operation - blocks all readers and other writers
    new_data = generate_fresh_data()
    update_cache(new_data)

# Manual RW lock with timeout
try:
    await client.acquire_rw_lock("myapp", "shared_resource", write=True, timeout_ms=10000)
    try:
        # Exclusive write access
        modify_shared_resource()
    finally:
        await client.release_rw_lock("myapp", "shared_resource")
except TimeoutError:
    print("Could not acquire write lock within 10 seconds")

# Pattern: Resource pool management
async def get_worker_from_pool():
    async with client.lock("myapp", "worker_pool"):
        worker_id = allocate_next_worker()
        if worker_id:
            return worker_id
        else:
            raise ResourceExhausted("No workers available")

# Pattern: Cache coherency with RW locks
async def read_from_cache(key):
    async with client.rw_lock("myapp", f"cache:{key}", write=False):
        return cache.get(key)

async def update_cache(key, value):
    async with client.rw_lock("myapp", f"cache:{key}", write=True):
        cache.set(key, value)
        await invalidate_related_entries(key)

# Pattern: Leader election
async def try_become_leader():
    try:
        await client.acquire_lock("myapp", "leader_election", timeout_ms=1000)
        # This process is now the leader
        return True
    except TimeoutError:
        # Another process is the leader
        return False

# Check lock status
lock_info = await client.describe_lock("myapp", "my_lock")
if lock_info.exists:
    print(f"Lock owned by: {lock_info.owner}")
    print(f"Expires at: {lock_info.expires_at}")
    print(f"Waiting clients: {len(lock_info.waiters)}")

# Force release stuck locks (admin operation)
await client.force_release_lock("myapp", "stuck_lock")

Common lock error conditions:

  • FAILED_PRECONDITION: Attempting to release a lock you don’t own
  • DEADLINE_EXCEEDED: Lock acquisition timed out
  • PERMISSION_DENIED: Invalid credential ID or insufficient namespace access
  • UNAVAILABLE: Cluster is not ready or no leader elected
  • ALREADY_EXISTS: Lock creation failed (rare edge case)

Best practices:

  • Granular Locking: Use specific lock names rather than global locks
  • Consistent Naming: Establish lock naming conventions (resource_type:resource_id)
  • TTL Usage: Always set appropriate TTL to prevent deadlocks from crashed clients
  • Timeout Strategy: Use reasonable timeouts to avoid indefinite blocking
  • Read/Write Preference: Use RW locks when you have read-heavy workloads
  • Lock Scope: Minimize critical section duration
  • Avoid Nested Locks: Prevent deadlock by acquiring locks in a consistent order (see the ordering sketch after this list)
  • Connection Reuse: Share client connections across lock operations
  • Monitoring: Track lock acquisition times and contention metrics
  • Alerting: Monitor for locks held longer than expected
  • Cleanup: Use force release for administrative cleanup of abandoned locks
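
As referenced in the Avoid Nested Locks item above, a sketch of acquiring two locks in a fixed order so that no two clients can ever wait on each other in a cycle (the helper function is hypothetical):

async def with_both_resources(namespace, name_a, name_b):
    # Every caller sorts the lock names, so all clients acquire in the same order.
    first, second = sorted((name_a, name_b))
    async with client.lock(namespace, first):
        async with client.lock(namespace, second):
            # Both locks held; perform the combined update here.
            ...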