How To Encrypt and Upload Big Files to Amazon S3 in Laravel

Source: Wikipedia Commons

Last week I wrote an article called How to encrypt large files in Laravel, explaining how to encrypt large files on the local filesystem. Today I want to show you how to upload and encrypt large files to Amazon S3, using Laravel and the FileVault package.

First, I'll explain a couple of concepts and methods that we will be using, then we'll write the code.

Streaming Files to Amazon S3

Laravel already ships with all the tools needed to upload a file to Amazon S3. If you don't know how to do that already, take a look at the putFile and putFileAs functions on the Storage facade. With either of these two functions, Laravel will automatically manage streaming a file to a storage location, such as Amazon S3. All you need to do is something like this:

                Storage::disk('s3')->putFile('photos', new File('/path/to/photo'));              
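If you want control over the stored filename instead of letting Laravel generate one, putFileAs works the same way. A minimal sketch (the directory and filename here are hypothetical examples, not part of the app we're building):

```php
use Illuminate\Http\File;
use Illuminate\Support\Facades\Storage;

// Same streaming behavior as putFile, but we choose the stored name.
// 'photos' and 'vacation.jpg' are placeholder values.
Storage::disk('s3')->putFileAs('photos', new File('/path/to/photo'), 'vacation.jpg');
```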

Streaming a file to S3 may take a long time, depending on the network speed. Even though the putFile and putFileAs functions stream the file in segments and won't consume a lot of memory, this is still a task that may end up taking a lot of time to complete, causing timeouts. That's why it's recommended to use queued jobs for this operation.

Using Queued Jobs

Queues allow you to defer the processing of a time-consuming task. Deferring these time-consuming tasks drastically speeds up web requests to your application.

We will use two separate queued jobs, one to encrypt the file and another one to upload the encrypted file to Amazon S3.

In Laravel, you can chain queued jobs so that the jobs run in sequence. This way, we can start uploading the file to S3 immediately after the file has been encrypted.

Let's Start Coding

In this tutorial, we will build the encrypt and upload-to-S3 functionality on top of the app created in our previous tutorial. If you haven't already seen my previous piece, here it is.

As a quick recap, we have built a simple app where users can log in and upload files that will be encrypted as soon as the upload finishes.

Configure Amazon S3

First, you will need to configure S3 on the Amazon side and create a bucket where we will store the encrypted files. This tutorial does a great job of explaining how to create a bucket, add the proper policies, associate an IAM user with it, and add the AWS variables to your .env file.
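When you're done, your .env file should contain entries along these lines (the values below are placeholders; use your own credentials, region, and bucket name):

```
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=your-bucket-name
```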

As per the Laravel docs, we also need to install the Flysystem adapter package via Composer:

                composer require league/flysystem-aws-s3-v3              

We also need to install an additional package for a cached adapter, an absolute must for performance:

                composer require league/flysystem-cached-adapter              

Creating Queueable Jobs

Next, let's create the two queueable jobs that we will use for encryption and uploading to S3:

                php artisan make:job EncryptFile
                php artisan make:job MoveFileToS3

This will create two files in app/Jobs: EncryptFile.php and MoveFileToS3.php. These jobs take a param in the constructor, which represents the filename. We add the encryption and S3-upload functionality in each job's handle method. This is what the two jobs look like:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use SoareCostin\FileVault\Facades\FileVault;

class EncryptFile implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $filename;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($filename)
    {
        $this->filename = $filename;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        FileVault::encrypt($this->filename);
    }
}

<?php

namespace App\Jobs;

use Exception;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Http\File;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Storage;

class MoveFileToS3 implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $filename;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($filename)
    {
        $this->filename = $filename . '.enc';
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        // Upload the file to S3
        $result = Storage::disk('s3')->putFileAs(
            '/',
            new File(storage_path('app/' . $this->filename)),
            $this->filename
        );

        // Force collection of any existing garbage cycles.
        // If we don't do this, in some cases the file remains locked.
        gc_collect_cycles();

        if ($result === false) {
            throw new Exception("Couldn't upload file to S3");
        }

        // Delete the file from the local filesystem
        if (!Storage::disk('local')->delete($this->filename)) {
            throw new Exception('File could not be deleted from the local filesystem');
        }
    }
}

As you can see, the EncryptFile job is simple: we are just using the FileVault package to encrypt a file and save it into the same directory, with the same name and the .enc extension. It's exactly what we were doing before, in the HomeController's store method.

For the MoveFileToS3 job, we are first using Laravel's putFileAs method, which will automatically stream our file to S3, following the same directory convention as we had on the local filesystem.

We are then calling PHP's gc_collect_cycles function to force collection of any existing garbage cycles. In some cases, if we don't run this function, the file remains locked and we won't be able to delete it in the next step.

Finally, we delete the file from the local filesystem, throwing an exception if the upload or the delete fails.

Updating the Controller

Now let's update the HomeController.php file to match the new functionality.

Instead of encrypting the file inline with the FileVault package in the store method, we now dispatch the newly created queued jobs, chained together:

EncryptFile::withChain([
    new MoveFileToS3($filename),
])->dispatch($filename);

Next, in the index method, we send both the local files and the S3 files of a user to the view, so that we can display the files that are still being encrypted and streamed to S3 together with the files that are already encrypted and stored in S3:

                $localFiles = Storage::files('files/' . auth()->user()->id);
$s3Files = Storage::disk('s3')->files('files/' . auth()->user()->id);
return view('home', compact('localFiles', 's3Files'));

We also update our downloadFile method, specifying that we want to download and stream the file from S3 instead of the local filesystem. We just chain a disk('s3') call onto both the Storage and FileVault facades.

This is what the HomeController.php file looks like:

<?php

namespace App\Http\Controllers;

use App\Jobs\EncryptFile;
use App\Jobs\MoveFileToS3;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;
use SoareCostin\FileVault\Facades\FileVault;

class HomeController extends Controller
{
    /**
     * Create a new controller instance.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware('auth');
    }

    /**
     * Show the application dashboard.
     *
     * @return \Illuminate\Contracts\Support\Renderable
     */
    public function index()
    {
        $localFiles = Storage::files('files/' . auth()->user()->id);
        $s3Files = Storage::disk('s3')->files('files/' . auth()->user()->id);

        return view('home', compact('localFiles', 's3Files'));
    }

    /**
     * Store a user uploaded file
     *
     * @param \Illuminate\Http\Request $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
        if ($request->hasFile('userFile') && $request->file('userFile')->isValid()) {
            $filename = Storage::putFile('files/' . auth()->user()->id, $request->file('userFile'));

            // Check that we have a valid uploaded file
            if ($filename) {
                EncryptFile::withChain([
                    new MoveFileToS3($filename),
                ])->dispatch($filename);
            }
        }

        return redirect()->route('home')->with('message', 'Upload complete');
    }

    /**
     * Download a file
     *
     * @param string $filename
     * @return \Illuminate\Http\Response
     */
    public function downloadFile($filename)
    {
        // Basic validation to check that the file exists and is in the user directory
        if (!Storage::disk('s3')->has('files/' . auth()->user()->id . '/' . $filename)) {
            abort(404);
        }

        return response()->streamDownload(function () use ($filename) {
            FileVault::disk('s3')->streamDecrypt('files/' . auth()->user()->id . '/' . $filename);
        }, Str::replaceLast('.enc', '', $filename));
    }
}

Updating the View

The last thing we need to do is update the home.blade.php view file, so that we can display not only the user files that have been encrypted and stored in S3 but also the files that are being encrypted and uploaded to S3 at that moment.

Note: You can make this step much more engaging by using JavaScript to show a spinning icon for the files that are being encrypted and streamed to S3, and refreshing the table once the files have been uploaded. Because we want to keep this tutorial strictly to the point of deferring the encryption and S3 upload to a separate process, we'll stick to a basic solution that requires a manual refresh in order to see any updates to the queued jobs' status.

<h4>Your files</h4>
<ul class="list-group">
    @forelse ($s3Files as $file)
        <li class="list-group-item">
            <a href="{{ route('downloadFile', basename($file)) }}">
                {{ basename($file) }}
            </a>
        </li>
    @empty
        <li class="list-group-item">You have no files</li>
    @endforelse
</ul>

@if (!empty($localFiles))
    <hr />
    <h4>Uploading and encrypting...</h4>
    <ul class="list-group">
        @foreach ($localFiles as $file)
            <li class="list-group-item">
                {{ basename($file) }}
            </li>
        @endforeach
    </ul>
@endif

Queue Configuration

If you haven't made any changes to the queue configuration, you are most likely using the synchronous driver (sync) that is set by default in Laravel. This driver executes jobs immediately and is designed specifically for local use. However, we want to see how deferring our two queued jobs will work in production, so we will configure the queues to work with the database driver.

In order to use the database queue driver, you will need a database table to hold the jobs. To generate a migration that creates this table, run the queue:table Artisan command. Once the migration has been created, you may migrate your database using the migrate command:

                php artisan queue:table
                php artisan migrate

The last step is updating your QUEUE_CONNECTION variable in your .env file to use the database driver:

                QUEUE_CONNECTION=database              

Running the Queue Worker

Next, we need to run the queue worker. Laravel includes a queue worker that will process new jobs as they are pushed onto the queue. You may run the worker using the queue:work Artisan command. You can specify the maximum number of times a job should be attempted using the --tries switch on the queue:work command:

                php artisan queue:work --tries=3
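Since streaming a large file to S3 can take longer than the worker's default 60-second job timeout, you may also want to raise that limit with the --timeout switch (the value below is just an illustrative choice, not a recommendation from the original app):

```shell
php artisan queue:work --tries=3 --timeout=300
```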

Time to Test

We're now ready to test our changes. Once you upload a file, you should see it immediately displayed in the "Uploading and encrypting…" section.

If you switch to the terminal where you started the queue worker, you should see the jobs running in sequence. Once both jobs are completed, the file should be found in S3 and no longer exist in the local filesystem.

Refreshing the user dashboard after the jobs have finished should display the file in the "Your files" section, with a link to stream-download it from S3.

That's it!

You can find the entire Laravel app in this GitHub repo and the changes made above in this commit.


Source: https://betterprogramming.pub/how-to-encrypt-upload-large-files-to-amazon-s3-in-laravel-af88324a9aa
