How to: Upload (Advanced)
Introduction
Reliable asset uploading is the core function of every C2C integration. This guide provides advanced techniques and best practices for creating a robust, resilient, and efficient upload system that performs well even in challenging environments.
Prerequisites
If you haven’t already, please review the Implementing C2C: Setting Up guide before proceeding.
You’ll need the `access_token` obtained during the authentication and authorization process.
We’ll continue using the same test asset from the Basic Uploads guide.
Advanced Asset Parameters
When creating assets in Frame.io, you can use several advanced parameters to customize upload behavior. The `offset` parameter is particularly important for proper integration.
Offset - Handling Paused Devices
Providing an accurate `offset` value is critical. This parameter specifies when a piece of media was created and ensures your device doesn’t upload content that shouldn’t be shared. When a device is paused in Frame.io, the user is indicating that media created during the pause should not be uploaded. For more details, see our guide on pause functionality.
Additional Benefits of the `offset` Parameter
The `offset` parameter provides another significant advantage for organizing media within Frame.io. When uploading content captured at an earlier date—for example, when a user selects a photo taken the previous week during playback—the `offset` parameter ensures this media appears in folders corresponding to its original capture date rather than the current upload date.
This chronological organization maintains a logical timeline in the Frame.io project structure. Without the `offset` parameter, historical media would incorrectly appear grouped with today’s content, potentially causing confusion for editors and other collaborators.
You may wish to provide users with a choice in this matter through your interface. If users prefer to organize all uploads by the current date regardless of when the media was captured, you can simply omit the `offset` parameter, as it defaults to 0 when not specified.
Our API design eliminates the need for your device to track pause status. Instead, when uploading a file, you indicate how many seconds ago the file was created. Our server compares this against pause windows and rejects the upload if it was created during a pause.
To demonstrate this feature, pause your device from the three-dot menu in the C2C Connections tab.
Now attempt to upload an asset:
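A minimal request sketch follows; the endpoint and `Authorization` header match the C2C flow from the basic guide, while the file name and payload values are illustrative, matching the test asset:

```bash
curl -X POST https://api.frame.io/v2/devices/assets \
    --header "Authorization: Bearer $ACCESS_TOKEN" \
    --header "Content-Type: application/json" \
    --data '{
        "name": "C2C_TEST_CLIP.mov",
        "filetype": "video/quicktime",
        "filesize": 21136250
    }'
```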
API endpoint specification
Documentation for /v2/devices/assets can be found here.
You’ll receive this error:
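The exact payload may vary, but the response will indicate the pause along these lines (illustrative shape; the error title comes from the behavior described below):

```json
{
  "errors": [
    {
      "code": 403,
      "detail": "This device's channel is currently paused",
      "title": "Channel Paused"
    }
  ]
}
```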
If you unpause the device and retry with the same request, the asset will be created.
However, if the asset was created during the pause window, you need to set the `offset` to reflect when it was actually created:
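For example, assuming the file was created 60 seconds ago (the negative value expresses seconds in the past; confirm the sign convention against the endpoint documentation):

```bash
curl -X POST https://api.frame.io/v2/devices/assets \
    --header "Authorization: Bearer $ACCESS_TOKEN" \
    --header "Content-Type: application/json" \
    --data '{
        "name": "C2C_TEST_CLIP.mov",
        "filetype": "video/quicktime",
        "filesize": 21136250,
        "offset": -60
    }'
```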
This tells Frame.io the asset was created 60 seconds ago (during the pause), which properly triggers the `Channel Paused` error.
Accurate `offset` values are essential to prevent uploading sensitive content against the user’s wishes, including protected intellectual property, sensitive footage, or other restricted material.
Offset and retries
When retrying a failed asset creation call, remember to update the `offset` value. During extended retry periods, a static offset might drift out of the relevant pause window, potentially allowing uploads that should be blocked.
Uploading to a Specific Channel
If your device has multiple channels, you can specify which one to use:
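For example (a sketch; the `channel` field is passed alongside the other asset parameters, and the value shown is illustrative):

```bash
curl -X POST https://api.frame.io/v2/devices/assets \
    --header "Authorization: Bearer $ACCESS_TOKEN" \
    --header "Content-Type: application/json" \
    --data '{
        "name": "C2C_TEST_CLIP.mov",
        "filetype": "video/quicktime",
        "filesize": 21136250,
        "channel": 1
    }'
```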
If not specified, the default channel is 0. Most integrations won’t need to change this value.
Requesting a Custom Chunk Count
By default, Frame.io’s backend divides files into approximately 25MB chunks. For networks with high congestion, you might prefer smaller chunks. You can request a specific number of chunks with the `parts` parameter:
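For example, requesting four chunks for our 21,136,250-byte test file:

```bash
curl -X POST https://api.frame.io/v2/devices/assets \
    --header "Authorization: Bearer $ACCESS_TOKEN" \
    --header "Content-Type: application/json" \
    --data '{
        "name": "C2C_TEST_CLIP.mov",
        "filetype": "video/quicktime",
        "filesize": 21136250,
        "parts": 4
    }'
```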
The response will include four upload URLs:
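Abridged and illustrative; the actual response contains additional asset fields:

```json
{
  "id": "…",
  "upload_urls": [
    "https://…/part-1",
    "https://…/part-2",
    "https://…/part-3",
    "https://…/part-4"
  ]
}
```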
The chunk size will be 21,136,250 / 4 = 5,284,062.5, rounded up to 5,284,063 bytes. The last chunk will be 5,284,061 bytes (calculated as 21136250 - 5284063 * 3).
When requesting custom chunk counts, be aware of AWS S3 multipart upload limitations:
- Each part must be at least 5 MiB (5,242,880 bytes), except for the final part
- There can be no more than 10,000 parts
If your request violates these constraints, you’ll receive a `500: INTERNAL SERVER ERROR` response.
Always verify your custom part count conforms to S3’s requirements.
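A small pre-flight check makes this easy to enforce. Here’s a sketch in Python; the constants mirror the S3 limits listed above:

```python
MIN_PART_SIZE = 5 * 1024 * 1024  # 5 MiB: S3 minimum for every part except the last
MAX_PARTS = 10_000               # S3 maximum number of parts

def valid_part_count(filesize: int, parts: int) -> bool:
    """Return True if `parts` is a legal S3 multipart count for `filesize`."""
    if not 1 <= parts <= MAX_PARTS:
        return False
    chunk_size = -(-filesize // parts)  # ceiling division
    # Every part except the last must meet the 5 MiB minimum.
    return parts == 1 or chunk_size >= MIN_PART_SIZE

assert valid_part_count(21136250, 4)       # 5,284,063-byte chunks: OK
assert not valid_part_count(21136250, 5)   # chunks would fall below 5 MiB
```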
Uploading Efficiently
C2C devices often operate in challenging network environments, so efficiency is crucial. Here are strategies to maximize throughput.
TCP Connection Reuse/Pooling
Establishing encrypted connections requires significant negotiation overhead. For efficient operation, reuse TCP connections when making multiple requests. Most HTTP libraries provide a `Client` or `Session` abstraction that maintains persistent connections.
The negotiation process for a new HTTPS connection includes cryptographic handshakes and certificate validation. By reusing connections, you only perform this overhead once rather than for each request.
TCP handshake reference
For technical details on TLS handshake processes, see Cloudflare’s explanation.
To demonstrate connection reuse with `curl`, first create a new asset in Frame.io as described in the basic upload guide.
Next, split the file into separate chunks for testing:
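On systems with GNU coreutils, split can produce two equal chunks (the file name is the test asset from the basic guide; adjust for yours):

```bash
# Produces chunk_aa and chunk_ab
split -n 2 C2C_TEST_CLIP.mov chunk_
```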
Now upload both chunks over a single TCP connection using curl’s `--next` parameter:
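Something like the following, substituting the first two upload URLs from the asset-creation response; the content-type and x-amz-acl headers shown are carried over from the basic upload flow, so match whatever your asset response actually requires:

```bash
curl --request PUT "$UPLOAD_URL_1" \
    --header "content-type: video/quicktime" \
    --header "x-amz-acl: private" \
    --upload-file chunk_aa \
  --next \
    --request PUT "$UPLOAD_URL_2" \
    --header "content-type: video/quicktime" \
    --header "x-amz-acl: private" \
    --upload-file chunk_ab
```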
Compare this to separate connections:
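Running two independent invocations forces a fresh TLS negotiation for each chunk:

```bash
curl --request PUT "$UPLOAD_URL_1" \
    --header "content-type: video/quicktime" \
    --header "x-amz-acl: private" \
    --upload-file chunk_aa

curl --request PUT "$UPLOAD_URL_2" \
    --header "content-type: video/quicktime" \
    --header "x-amz-acl: private" \
    --upload-file chunk_ab
```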
Reusing chunk URLs
You can upload to the same chunk URL multiple times, so feel free to reuse URLs between examples.
In testing, connection reuse typically improves performance by 15-20% for sequential uploads.
Parallel Uploads
For even greater throughput, upload multiple chunks simultaneously:
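In a shell, backgrounding each transfer is the simplest way to demonstrate this (same hypothetical URL variables and headers as above):

```bash
curl --request PUT "$UPLOAD_URL_1" \
    --header "content-type: video/quicktime" \
    --header "x-amz-acl: private" \
    --upload-file chunk_aa &
curl --request PUT "$UPLOAD_URL_2" \
    --header "content-type: video/quicktime" \
    --header "x-amz-acl: private" \
    --upload-file chunk_ab &
wait  # block until both background uploads complete
```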
With sufficient bandwidth, parallel uploads complete in approximately the time of the slowest individual upload.
For optimal parallelism, a good rule of thumb is two concurrent uploads per CPU core. Exceeding this ratio can lead to resource contention and diminishing returns.
Parallel upload speeds
Network conditions significantly impact parallel upload performance. In some environments, sequential uploads may outperform parallel ones. Advanced implementations might monitor throughput and dynamically adjust concurrency. Always profile performance in your actual production environment rather than relying on example timing.
Combining Both Approaches
For maximum efficiency, combine connection pooling with parallel uploads. Create multiple processes, each using connection pooling for its own sequence of uploads:
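As a sketch, assuming the file was split into four chunks this time (`split -n 4`), each backgrounded curl process reuses its own connection via `--next`:

```bash
curl --request PUT "$UPLOAD_URL_1" \
    --header "content-type: video/quicktime" \
    --header "x-amz-acl: private" \
    --upload-file chunk_aa \
  --next \
    --request PUT "$UPLOAD_URL_2" \
    --header "content-type: video/quicktime" \
    --header "x-amz-acl: private" \
    --upload-file chunk_ab &

curl --request PUT "$UPLOAD_URL_3" \
    --header "content-type: video/quicktime" \
    --header "x-amz-acl: private" \
    --upload-file chunk_ac \
  --next \
    --request PUT "$UPLOAD_URL_4" \
    --header "content-type: video/quicktime" \
    --header "x-amz-acl: private" \
    --upload-file chunk_ad &

wait
```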
HTTP library features
Most HTTP libraries provide abstractions for connection pooling and parallel requests. Experiment with your library’s options to determine the optimal configuration for your environment.
Tracking Upload Progress
Your integration must provide basic progress indication to users. Chunk-level granularity is acceptable—for a three-chunk upload, progress might increment from 0% → 33% → 66% → 100% as each chunk completes.
Finer-grained progress reporting depends on your HTTP library’s capabilities. Contact our team if you need guidance on implementing more detailed progress tracking.
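At chunk granularity the arithmetic is trivial. A sketch, where `put_chunk` and `report_progress` are hypothetical callbacks supplied by your integration:

```python
def upload_with_progress(chunks, upload_urls, put_chunk, report_progress):
    """Upload chunks in order, reporting percent complete after each one."""
    total = len(chunks)
    for done, (chunk, url) in enumerate(zip(chunks, upload_urls), start=1):
        put_chunk(url, chunk)                 # PUT the bytes to the chunk URL
        report_progress(100 * done // total)  # 0 -> 33 -> 66 -> 100 for 3 chunks
```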
Uploading Reliably
For robust error handling, review our errors guide. The following sections assume you’ve implemented the error handling strategies described there.
Creating a production-quality uploader requires additional considerations beyond handling individual request errors.
Creating an Upload Queue
In real-world scenarios, your device may generate media faster than it can upload, or it might experience extended connection interruptions. Implementing a queuing system separates media creation from upload management.
Consider a two-queue architecture:
- A media queue for registering local files with Frame.io
- A chunk queue for uploading individual file chunks
Here’s a simplified implementation:
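This is a minimal in-memory sketch using Python’s standard library; `create_asset`, `read_chunks`, and `upload_chunk` are hypothetical helpers wrapping the HTTP calls described earlier:

```python
import queue
import threading

# Hypothetical helpers wrapping the HTTP calls described earlier:
#   create_asset(path)      -> list of chunk upload URLs (POST /v2/devices/assets)
#   read_chunks(path, n)    -> n byte-strings sized to match those URLs
#   upload_chunk(url, data) -> PUTs the bytes to the chunk URL
from my_integration import create_asset, read_chunks, upload_chunk  # hypothetical module

media_queue = queue.Queue()  # local file paths waiting to be registered with Frame.io
chunk_queue = queue.Queue()  # (upload_url, chunk_bytes) pairs waiting to be uploaded

def media_worker():
    """Register queued files as Frame.io assets, then queue their chunks."""
    while True:
        path = media_queue.get()
        upload_urls = create_asset(path)
        for url, chunk in zip(upload_urls, read_chunks(path, len(upload_urls))):
            chunk_queue.put((url, chunk))
        media_queue.task_done()

def chunk_worker():
    """Upload queued chunks one at a time, reusing a pooled connection."""
    while True:
        url, chunk = chunk_queue.get()
        upload_chunk(url, chunk)
        chunk_queue.task_done()

threading.Thread(target=media_worker, daemon=True).start()
for _ in range(2):  # a couple of parallel chunk uploaders
    threading.Thread(target=chunk_worker, daemon=True).start()

# As the device records new media, it feeds file paths into the media queue:
# media_queue.put("/media/clips/C2C_TEST_CLIP.mov")
```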
Error handling
In the above example, we assume that the functions invoked for C2C calls handle errors as discussed in the errors guide.
Persistent Queuing Across Power Cycles
The in-memory queue approach works well while the device remains powered on, but what happens if power is lost before uploads complete? To create a truly resilient integration, we need to ensure the device can resume from where it left off after restarting.
This requires persisting the queue state to storage between power cycles. An embedded database such as SQLite provides an excellent foundation for this functionality.
Your persistent queue implementation should support these key operations:
- Adding newly created files to the upload queue
- Tracking when assets are successfully created in Frame.io
- Recording when asset creation fails due to errors
- Storing file chunk information for upload tasks
- Retrieving the next chunk to be uploaded
- Marking chunks as successfully uploaded
- Logging chunk upload failures
- Providing file status information for user display
Here’s how we might adapt our previous example to use a persistent storage system:
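Here’s a sketch of the chunk side using SQLite; the schema and helper names are illustrative rather than a prescribed design, and the asset table would follow the same pattern:

```python
import sqlite3

db = sqlite3.connect("upload_queue.db")
db.execute(
    """CREATE TABLE IF NOT EXISTS chunks (
        id             INTEGER PRIMARY KEY,
        file_path      TEXT NOT NULL,
        upload_url     TEXT NOT NULL,
        byte_offset    INTEGER NOT NULL,
        byte_length    INTEGER NOT NULL,
        state          TEXT NOT NULL DEFAULT 'pending',  -- pending | in_progress | done | failed
        attempts       INTEGER NOT NULL DEFAULT 0,
        checked_out_at REAL                              -- unix time of last checkout
    )"""
)

def checkout_next_chunk():
    """Atomically claim the next pending chunk, recording the checkout time."""
    with db:
        row = db.execute(
            "SELECT id, file_path, upload_url, byte_offset, byte_length "
            "FROM chunks WHERE state = 'pending' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is not None:
            db.execute(
                "UPDATE chunks SET state = 'in_progress', attempts = attempts + 1, "
                "checked_out_at = strftime('%s', 'now') WHERE id = ?",
                (row[0],),
            )
        return row

def mark_chunk_done(chunk_id):
    with db:
        db.execute("UPDATE chunks SET state = 'done' WHERE id = ?", (chunk_id,))

def mark_chunk_failed(chunk_id):
    with db:
        db.execute("UPDATE chunks SET state = 'failed' WHERE id = ?", (chunk_id,))
```

A worker loop then replaces `chunk_queue.get()` with `checkout_next_chunk()` and reports the outcome with `mark_chunk_done` or `mark_chunk_failed`; because every state change lands in the database, a restart simply picks up the remaining pending rows.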
With this persistent storage approach, your integration becomes resilient to power interruptions. When the device restarts, it simply continues processing from its last saved state. This architecture also provides the foundation for implementing more advanced features, like error tracking and stalled upload detection.
Tracking Upload Errors
A robust upload system must carefully track errors. After retrying an operation using the strategies in the errors guide, record these failures in your persistence store. This allows your system to:
- Deprioritize problematic uploads to prevent them from blocking the entire queue
- Provide accurate status information to users
- Enable administrative intervention for persistent issues
When a fatal error occurs, mark the item to prevent unnecessary retry attempts.
Managing Stalled Uploads
Implement safeguards against indefinitely stalled uploads. Set a maximum duration (e.g., 30 minutes) after which a chunk upload task should be terminated and restarted. This prevents scenarios where all upload workers become blocked by non-responsive operations.
Recovering From Silent Failures
System crashes, power loss, or process termination can prevent normal error reporting. When retrieving items from your queue, record the checkout time. If an item remains in the “in progress” state beyond a reasonable threshold (e.g., 30 minutes) without reporting success or failure, automatically return it to the available pool for processing by another worker.
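With checkout times recorded, as in the hypothetical schema above, returning stale work to the pool is a single UPDATE:

```python
STALE_SECONDS = 30 * 60  # the 30-minute threshold suggested above

def requeue_stalled_chunks():
    """Return chunks stuck 'in progress' past the threshold to the pending pool."""
    with db:
        db.execute(
            "UPDATE chunks SET state = 'pending' "
            "WHERE state = 'in_progress' "
            "AND strftime('%s', 'now') - checked_out_at > ?",
            (STALE_SECONDS,),
        )
```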
Mitigating Poisoned Uploads
A “poisoned” queue item consistently fails due to inherent problems with the data or environment. If these items continuously requeue, they can effectively block your entire upload system. Consider these strategies for handling such cases:
- After multiple failures, deprioritize the item so newer content can proceed (see the sketch at the end of this section)
- Track both explicit errors and the number of processing attempts
- Follow connection and authorization best practices to distinguish between transient environmental issues and intrinsic file problems
- Implement escalating retry limits (e.g., retry individual operations 10 times within each of 3 job attempts, for 30 total attempts)
- Provide a user interface for manually resetting problematic uploads once environmental issues are resolved
Poisoned uploads can result from:
- Corrupted file data causing I/O errors
- Catastrophic process failures that prevent error reporting
- Normally retriable errors triggered by permanent underlying conditions
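Building on the attempt counter in the hypothetical schema above, deprioritization can be as simple as ordering the checkout query by attempts:

```python
def checkout_least_retried_chunk():
    """Like checkout_next_chunk(), but prefer chunks that have failed least often."""
    with db:
        row = db.execute(
            "SELECT id, file_path, upload_url, byte_offset, byte_length "
            "FROM chunks WHERE state = 'pending' "
            "ORDER BY attempts ASC, id ASC LIMIT 1"
        ).fetchone()
        if row is not None:
            db.execute(
                "UPDATE chunks SET state = 'in_progress', attempts = attempts + 1, "
                "checked_out_at = strftime('%s', 'now') WHERE id = ?",
                (row[0],),
            )
        return row
```

This way, a poisoned chunk that keeps failing drifts to the back of the queue instead of starving newer uploads.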
Retry After System Restart
Before permanently abandoning problematic uploads, flag them for one final retry after the next system restart. This addresses cases where uploads fail due to temporary system state issues with memory, drivers, or resource allocation. If an upload continues to fail after a clean restart, you can more confidently mark it as permanently problematic.
Clearing Your Queue
Remember to remove unavailable files from your queue. When media is physically removed or files are deleted, purge corresponding entries from your upload queue to prevent unnecessary errors.
Importantly, you must clear your upload queue when connecting to a new project. Media queued for one project should never appear in another. When a user pairs the device with a different project, verify whether the project has changed and, if so, completely clear the existing queue.
Next Steps
We encourage you to contact our team with any questions and to proceed to the next guide in the series when you’re ready. We look forward to supporting your integration progress.