Lock API
Primatomic provides robust distributed synchronization primitives including exclusive locks (mutexes) and read/write locks. All lock operations are serialized through the Raft consensus log, ensuring consistent ordering and fairness across the entire cluster.
Features
- Distributed Mutual Exclusion: Guaranteed exclusive access across multiple nodes
- Read/Write Locks: Concurrent readers with exclusive writers for optimized performance
- Automatic TTL Cleanup: Prevent deadlocks from crashed clients with time-to-live expiration
- Fair Queuing: FIFO ordering for waiting clients prevents starvation
- Blocking Operations: Optional timeouts for waiting on lock availability
- Crash Recovery: Locks automatically released when client connections are lost
create_lock
Creates or retrieves a named lock. Creating an existing lock is not an error and returns the existing lock identifier. Locks may expire automatically if a ttl_ms is supplied.
```proto
message CreateLockRequest {
  string key = 1;
  string namespace = 2;
  optional uint64 ttl_ms = 3; // Time to live in milliseconds
}

message CreateLockResponse {
  string key = 1;
  string namespace = 2;
  Status status = 3;
}
```
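Because creation is idempotent, clients can call create_lock unconditionally before acquiring. A minimal sketch, assuming the async Python SDK client shown in the SDK Examples below:

```python
import primatomic

async def ensure_lock(client: primatomic.Client) -> None:
    # First call creates the lock with a 30-second TTL.
    await client.create_lock("myapp", "resource_lock", ttl_ms=30000)
    # Repeating the call is not an error; the existing lock identifier is returned.
    await client.create_lock("myapp", "resource_lock", ttl_ms=30000)
```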
acquire_lock
Attempts to acquire exclusive ownership of a lock. A timeout in milliseconds can be specified. If the lock is held by another client, the request blocks until the lock becomes available or the timeout is reached.
```proto
message AcquireLockRequest {
  string key = 1;
  string namespace = 2;
  optional uint64 timeout_ms = 3; // Time to wait for the lock in milliseconds
}

message AcquireLockResponse {
  string key = 1;
  string namespace = 2;
  bool acquired = 3;
  Status status = 4;
}
```
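A short sketch of the blocking behavior, assuming the SDK's acquire_lock coroutine used in the Lock Patterns section below and a TimeoutError on expiry (the exact exception may vary by SDK version):

```python
async def run_batch(client) -> None:
    # Block for up to 5 seconds waiting for exclusive ownership.
    try:
        await client.acquire_lock("myapp", "batch_job", timeout_ms=5000)
    except TimeoutError:
        print("Lock still held by another client")
        return
    try:
        process_batch_data()  # hypothetical critical-section work
    finally:
        await client.release_lock("myapp", "batch_job")
```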
release_lock
Releases ownership of the lock. Clients that do not hold the lock receive a FAILED_PRECONDITION error.
```proto
message ReleaseLockRequest {
  string key = 1;
  string namespace = 2;
}

message ReleaseLockResponse {
  string key = 1;
  string namespace = 2;
  bool released = 3;
  Status status = 4;
}
```
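If a client releases a lock it does not own, the RPC fails with FAILED_PRECONDITION. A hedged sketch, assuming the SDK surfaces raw gRPC errors as grpc.RpcError (your SDK may wrap them differently):

```python
import grpc

async def safe_release(client) -> None:
    try:
        await client.release_lock("myapp", "batch_job")
    except grpc.RpcError as err:  # assumption: raw gRPC status codes are exposed
        if err.code() == grpc.StatusCode.FAILED_PRECONDITION:
            print("Release skipped: this client does not own the lock")
        else:
            raise
```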
force_release_lock
Removes a lock entry without validating the owner. Useful for clearing stale locks.
```
ForceReleaseLockRequest {
  namespace: string
  name: string
}
```
describe_lock
Returns whether a lock exists and optional metadata associated with it.
```
DescribeLockRequest {
  namespace: string
  name: string
}
```
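Together, describe_lock and force_release_lock support administrative cleanup. An illustrative sketch using the SDK helpers from the diagnostics section below:

```python
async def clear_if_stale(client, namespace: str, key: str) -> None:
    # Inspect the lock first, then clear it only if it still exists.
    info = await client.describe_lock(namespace, key)
    if info.exists:
        # force_release_lock skips owner validation, so reserve it for
        # locks known to be abandoned.
        await client.force_release_lock(namespace, key)
```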
Read/Write Locks
Use the create_rw_lock, acquire_rw_lock and release_rw_lock RPCs to synchronize readers and writers. Multiple readers may hold the lock at once, but writers have exclusive access. The create_rw_lock call accepts an optional ttl_ms similar to create_lock.
```
AcquireRWLockRequest {
  namespace: string
  name: string
  write: bool
  timeout_ms: uint64
}
```
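An illustrative sketch of these semantics, using the rw_lock context manager from the Python SDK examples below: the two readers can overlap, while the writer waits for exclusive access.

```python
import asyncio

async def reader(client, n: int) -> None:
    # write=False: any number of readers may hold the lock together.
    async with client.rw_lock("myapp", "data_cache", write=False):
        print(f"reader {n} has shared access")
        await asyncio.sleep(0.1)

async def writer(client) -> None:
    # write=True: waits for readers to drain, then holds the lock exclusively.
    async with client.rw_lock("myapp", "data_cache", write=True):
        print("writer has exclusive access")

async def demo(client) -> None:
    await asyncio.gather(reader(client, 1), reader(client, 2), writer(client))
```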
SDK Examples
Python SDK - Exclusive Locks
```python
import primatomic
import time

client = primatomic.Client("your-workspace.primatomic.com:443", jwt_token="your-api-token")

# Basic lock usage with context manager
with client.lock("myapp", "resource_lock"):
    # Critical section - only one client can execute this at a time
    print("Performing exclusive operation...")
    time.sleep(1)
    print("Operation complete")

# Manual lock management with timeout
lock = client.lock("myapp", "batch_job")
if lock.acquire(timeout_ms=5000):
    try:
        # Perform batch processing
        process_batch_data()
    finally:
        client.release_lock("myapp", "batch_job")
else:
    print("Could not acquire lock within 5 seconds")

# Lock with automatic TTL (expires in 30 seconds)
client.create_lock("myapp", "temp_lock", ttl_ms=30000)
```
Python SDK - Read/Write Locks
```python
# Multiple readers can access concurrently
async with client.rw_lock("myapp", "data_cache", write=False):
    # Read operation - multiple clients can execute simultaneously
    data = read_cached_data()
    process_read_only(data)

# Exclusive writer access
async with client.rw_lock("myapp", "data_cache", write=True):
    # Write operation - blocks all readers and other writers
    new_data = generate_fresh_data()
    update_cache(new_data)

# Manual RW lock with timeout
try:
    await client.acquire_rw_lock("myapp", "shared_resource", write=True, timeout_ms=10000)
    try:
        # Exclusive write access
        modify_shared_resource()
    finally:
        await client.release_rw_lock("myapp", "shared_resource")
except TimeoutError:
    print("Could not acquire write lock within 10 seconds")
```
Lock Patterns
```python
# Pattern: Resource pool management
async def get_worker_from_pool():
    async with client.lock("myapp", "worker_pool"):
        worker_id = allocate_next_worker()
        if worker_id:
            return worker_id
        else:
            raise ResourceExhausted("No workers available")

# Pattern: Cache coherency with RW locks
async def read_from_cache(key):
    async with client.rw_lock("myapp", f"cache:{key}", write=False):
        return cache.get(key)

async def update_cache(key, value):
    async with client.rw_lock("myapp", f"cache:{key}", write=True):
        cache.set(key, value)
        await invalidate_related_entries(key)

# Pattern: Leader election
async def try_become_leader():
    try:
        await client.acquire_lock("myapp", "leader_election", timeout_ms=1000)
        # This process is now the leader
        return True
    except TimeoutError:
        # Another process is the leader
        return False
```
Lock State and Diagnostics
```python
# Check lock status
lock_info = await client.describe_lock("myapp", "my_lock")
if lock_info.exists:
    print(f"Lock owned by: {lock_info.owner}")
    print(f"Expires at: {lock_info.expires_at}")
    print(f"Waiting clients: {len(lock_info.waiters)}")

# Force release stuck locks (admin operation)
await client.force_release_lock("myapp", "stuck_lock")
```
Error Handling
Common lock error conditions:
- FAILED_PRECONDITION: Attempting to release a lock you don't own
- DEADLINE_EXCEEDED: Lock acquisition timed out
- PERMISSION_DENIED: Invalid credential ID or insufficient namespace access
- UNAVAILABLE: Cluster is not ready or no leader elected
- ALREADY_EXISTS: Lock creation failed (rare edge case)
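A sketch of classifying these codes into retryable and fatal outcomes, assuming the SDK raises grpc.RpcError for failed RPCs (adjust to your SDK's error mapping):

```python
import grpc

async def acquire_or_back_off(client, namespace: str, key: str) -> bool:
    try:
        await client.acquire_lock(namespace, key, timeout_ms=5000)
        return True
    except grpc.RpcError as err:  # assumption: raw gRPC errors are surfaced
        code = err.code()
        if code in (grpc.StatusCode.DEADLINE_EXCEEDED, grpc.StatusCode.UNAVAILABLE):
            return False  # contended lock or no leader yet: back off and retry
        raise             # PERMISSION_DENIED and friends are not retryable
```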
Best Practices
Lock Design
- Granular Locking: Use specific lock names rather than global locks
- Consistent Naming: Establish lock naming conventions (resource_type:resource_id); see the sketch after this list
- TTL Usage: Always set appropriate TTL to prevent deadlocks from crashed clients
- Timeout Strategy: Use reasonable timeouts to avoid indefinite blocking
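A minimal sketch of the naming and TTL guidelines; build_lock_key and process_order are illustrative helpers, not part of the API:

```python
def build_lock_key(resource_type: str, resource_id: str) -> str:
    # resource_type:resource_id keeps locks granular and easy to audit.
    return f"{resource_type}:{resource_id}"

async def lock_order(client, order_id: str) -> None:
    key = build_lock_key("order", order_id)  # e.g. "order:42"
    # A TTL guards against deadlock if this client crashes while holding the lock.
    await client.create_lock("myapp", key, ttl_ms=30000)
    await client.acquire_lock("myapp", key, timeout_ms=5000)
    try:
        process_order(order_id)
    finally:
        await client.release_lock("myapp", key)
```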
Performance Optimization
- Read/Write Preference: Use RW locks when you have read-heavy workloads
- Lock Scope: Minimize critical section duration
- Avoid Nested Locks: If multiple locks must be held, acquire them in a consistent order to prevent deadlock (see the sketch after this list)
- Connection Reuse: Share client connections across lock operations
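When holding multiple locks is unavoidable, acquiring them in a sorted, consistent order removes circular waits. The helper below is an illustration built on the SDK's lock context manager, not an SDK feature:

```python
from contextlib import AsyncExitStack

async def acquire_in_order(client, namespace: str, keys: list[str]) -> AsyncExitStack:
    # Sorting gives every client the same acquisition order,
    # which prevents the circular waits behind most deadlocks.
    stack = AsyncExitStack()
    try:
        for key in sorted(keys):
            # client.lock(...) is used as an async context manager,
            # as in the Lock Patterns examples above.
            await stack.enter_async_context(client.lock(namespace, key))
    except BaseException:
        await stack.aclose()  # release any locks acquired before the failure
        raise
    return stack

# Usage:
# async with await acquire_in_order(client, "myapp", ["account:7", "order:42"]):
#     transfer_funds()  # hypothetical critical section
```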
Operational Considerations
- Monitoring: Track lock acquisition times and contention metrics (see the sketch after this list)
- Alerting: Monitor for locks held longer than expected
- Cleanup: Use force release for administrative cleanup of abandoned locks
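One way to capture acquisition-time metrics for monitoring and alerting; record_metric stands in for whatever metrics client you use:

```python
import time

async def timed_acquire(client, namespace: str, key: str, timeout_ms: int) -> float:
    # Measure how long acquisition waits so contention can be graphed and alerted on.
    start = time.monotonic()
    await client.acquire_lock(namespace, key, timeout_ms=timeout_ms)
    waited = time.monotonic() - start
    record_metric("lock.acquire.wait_seconds", waited, tags={"key": key})  # hypothetical metrics hook
    return waited
```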