Amazon::S3::Bucket(3pm) | User Contributed Perl Documentation | Amazon::S3::Bucket(3pm) |
Amazon::S3::Bucket - A container class for a S3 bucket and its contents.
    use Amazon::S3;

    # creates bucket object (no "bucket exists" check)
    my $bucket = $s3->bucket("foo");

    # create resource with meta data (attributes)
    my $keyname = 'testing.txt';
    my $value   = 'T';

    $bucket->add_key(
      $keyname, $value,
      { content_type        => 'text/plain',
        'x-amz-meta-colour' => 'orange',
      }
    );

    # list keys in the bucket
    $response = $bucket->list
      or die $s3->err . ": " . $s3->errstr;

    print $response->{bucket} . "\n";

    for my $key ( @{ $response->{keys} } ) {
      print "\t" . $key->{key} . "\n";
    }

    # check if resource exists.
    print "$keyname exists\n" if $bucket->head_key($keyname);

    # delete key from bucket
    $bucket->delete_key($keyname);
Instantiates a new bucket object.
Pass a hash or hash reference containing various options:
default: 4K
default: false
NOTE: This method does not check if a bucket actually exists unless you set "verify_region" to true. If the bucket does not exist, the constructor will set the region to the default region specified by the Amazon::S3 object ("account") that you passed.
Typically a developer will not call this method directly, but work through the interface in Amazon::S3, which will handle bucket creation.
add_key( key, value, configuration)
Write a new or existing object to S3.
Returns a boolean indicating the success or failure of the call. Check "err" and "errstr" for error messages if this operation fails. To examine the raw output of the response from the API call, use the "last_response()" method.
    my $retval = $bucket->add_key( 'foo', $content, {} );

    if ( !$retval ) {
      print STDERR Dumper(
        [ $bucket->err, $bucket->errstr, $bucket->last_response ] );
    }
The method works like "add_key" except the value is assumed to be a filename on the local file system. The file will be streamed rather than loaded into memory in one big chunk.
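For example, a sketch that streams a large local file to S3 (the key name, local path, and content type are illustrative):

```perl
# stream a large local file to S3 without loading it into memory
$bucket->add_key_filename(
  'backup/large-file.iso',
  '/tmp/large-file.iso',
  { content_type => 'application/octet-stream' }
) or die $s3->err . ": " . $s3->errstr;
```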
Copies an object from one bucket to another bucket. Note that the bucket represented by the bucket object is the destination. Returns a hash reference to the response object ("CopyObjectResult").
Headers returned from the request can be obtained using the "last_response()" method.
my $headers = { $bucket->last_response->headers->flatten };
Throws an exception if the response code is not 2xx. You can get an extended error message using the "errstr()" method.
    my $result = eval {
      return $bucket->copy_object(
        key    => 'foo.jpg',
        source => 'boo.jpg'
      );
    };

    if ($@) {
      die $bucket->errstr;
    }
Examples:
    $bucket->copy_object(
      key    => 'foo.jpg',
      source => 'boo.jpg'
    );

    $bucket->copy_object(
      key    => 'foo.jpg',
      source => 'boo.jpg',
      bucket => 'my-source-bucket'
    );

    $bucket->copy_object(
      key     => 'foo.jpg',
      headers => { 'x-amz-copy-source' => 'my-source-bucket/boo.jpg' }
    );
See CopyObject for more details.
%parameters is a list of key/value pairs described below:
Returns a configuration HASH for the given key. If the key does not exist in the bucket, "undef" will be returned.
HASH will contain the following members:
Takes a key and an optional HTTP method and fetches it from S3. The default HTTP method is GET.
The method returns "undef" if the key does not exist in the bucket and throws an exception (dies) on server errors.
On success, the method returns a HASHREF containing:
This method works like "get_key", but takes an additional filename to which the S3 resource will be written.
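A sketch of both retrieval styles; the key name and local path are illustrative, and the "value" member and the "get_key_filename" argument order shown here are assumptions based on the descriptions above:

```perl
# fetch into memory; undef means the key does not exist
my $object = $bucket->get_key('testing.txt');
print $object->{value} if $object;

# fetch straight to disk instead
$bucket->get_key_filename( 'testing.txt', 'GET', '/tmp/testing.txt' );
```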
Permanently removes $key_name from the bucket. Returns a boolean value indicating the operation's success.
Permanently removes the bucket from the server. A bucket cannot be removed if it contains any keys (contents).
This is an alias for "$s3->delete_bucket($bucket)".
List all keys in this bucket.
See "list_bucket" in Amazon::S3 for documentation of this method.
See "list_bucket_v2" in Amazon::S3 for documentation of this method.
List all keys in this bucket without having to worry about 'marker'. This may make multiple requests to S3 under the hood.
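For example, a sketch that walks every key in the bucket (the "size" member of each key entry is an assumption; check the "list_bucket" documentation in Amazon::S3 for the exact members returned):

```perl
# iterate every key without handling pagination markers yourself
my $response = $bucket->list_all
  or die $s3->err . ": " . $s3->errstr;

for my $key ( @{ $response->{keys} } ) {
  printf "%s\t%s\n", $key->{key}, $key->{size};
}
```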
See "list_bucket_all" in Amazon::S3 for documentation of this method.
Same as "list_all" but uses the version 2 API for listing keys.
See "list_bucket_all_v2" in Amazon::S3 for documentation of this method.
Retrieves the Access Control List (ACL) for the bucket or resource as an XML document.
set_acl(acl)
Sets the Access Control List (ACL) for the bucket or resource. Requires a HASHREF argument with one of the following keys:
According to the Amazon S3 API documentation, the recognized acl_short types are defined as follows:
Returns a boolean indicating the operation's success.
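As a sketch, applying a canned ACL with "acl_short" (this assumes "acl_short" accepts the canned ACL names from the S3 API, such as "private" and "public-read", and that a "key" member targets an object rather than the bucket itself):

```perl
# make the whole bucket publicly readable
$bucket->set_acl( { acl_short => 'public-read' } )
  or die $s3->err . ": " . $s3->errstr;

# or restrict a single object (the 'key' member is an assumption)
$bucket->set_acl( { key => 'testing.txt', acl_short => 'private' } );
```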
Returns the location constraint (the region the bucket resides in) for a bucket, or undef if the bucket has no location constraint.
Valid values that may be returned:
af-south-1 ap-east-1 ap-northeast-1 ap-northeast-2 ap-northeast-3 ap-south-1 ap-southeast-1 ap-southeast-2 ca-central-1 cn-north-1 cn-northwest-1 EU eu-central-1 eu-north-1 eu-south-1 eu-west-1 eu-west-2 eu-west-3 me-south-1 sa-east-1 us-east-2 us-gov-east-1 us-gov-west-1 us-west-1 us-west-2
For more information on location constraints, refer to the documentation for GetBucketLocation <https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html>.
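Because buckets in the original us-east-1 region report no location constraint, a fallback is useful (a sketch):

```perl
# undef from get_location_constraint() implies the default region
my $region = $bucket->get_location_constraint() // 'us-east-1';
print "bucket lives in $region\n";
```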
The S3 error code for the last error the account encountered.
A human readable error string for the last error the account encountered.
A hash reference containing the decoded XML body of the last error response.
Returns the last "HTTP::Response" to an API call.
From Amazon's website:
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.
See <https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html> for more information about multipart uploads.
A multipart upload begins by calling "initiate_multipart_upload()". This will return an identifier that is used in subsequent calls.
    my $bucket = $s3->bucket('my-bucket');

    my $id = $bucket->initiate_multipart_upload('some-big-object');

    my $part_list = {};
    my $part      = 1;

    my $etag = $bucket->upload_part_of_multipart_upload(
      'some-big-object', $id, $part, $data, length $data );

    $part_list->{ $part++ } = $etag;

    $bucket->complete_multipart_upload( 'some-big-object', $id, $part_list );
upload_multipart_object( ... )
Convenience routine "upload_multipart_object" that encapsulates the multipart upload process. Accepts a hash or hash reference of arguments. If successful, returns a reference to a hash containing the part numbers and ETags of the uploaded parts.
You can pass a data object, callback routine or a file handle.
default: true
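A sketch using a file handle; the argument names "key" and "fh" follow the hash-style interface described above but should be treated as assumptions:

```perl
open my $fh, '<', '/tmp/large-file.iso'
  or die "cannot open: $!";

# on success, returns a hash reference of part numbers and ETags
my $parts = $bucket->upload_multipart_object(
  key => 'backup/large-file.iso',
  fh  => $fh,
);

close $fh;
```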
abort_multipart_upload(key, multipart-upload-id)
Abort a multipart upload
complete_multipart_upload(key, multipart-upload-id, parts)
Signal completion of a multipart upload. "parts" is a reference to a hash of part numbers and etags.
initiate_multipart_upload(key, headers)
Initiate a multipart upload. Returns an id used in subsequent call to "upload_part_of_multipart_upload()".
List all the uploaded parts of a multipart upload
List multipart uploads in progress
upload_part_of_multipart_upload(key, id, part, data, length)
Upload a portion of a multipart upload
Amazon::S3
Please see the Amazon::S3 manpage for author, copyright, and license information.
Rob Lauer Jojess Fournier Tim Mullin Todd Rinaldo
2023-02-10 | perl v5.36.0 |