Using the JetS3t Java Client With Amazon S3

1. Overview

In this tutorial, we’ll use the JetS3t library with Amazon S3.

Simply put, we’ll create buckets, write data to them, read data back, copy it, and then list and delete them.

2. JetS3t Setup

2.1. Maven Dependency

First, we need to add the JetS3t library and Apache HttpClient to our pom.xml:

<dependency>
    <groupId>org.lucee</groupId>
    <artifactId>jets3t</artifactId>
    <version>0.9.4.0006L</version>
</dependency>
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.5</version>
</dependency>

Maven Central has the latest version of the JetS3t library and the latest version of HttpClient. The source for JetS3t can be found here.

We’ll be using Apache Commons Codec for one of our tests, so we’ll add that to our pom.xml too:

<dependency>
    <groupId>org.lucee</groupId>
    <artifactId>commons-codec</artifactId>
    <version>1.10.L001</version>
</dependency>

Maven Central has the latest version here.

2.2. Amazon AWS Keys

We’ll need AWS Access Keys to connect to the S3 storage service. A free account can be created here.

After we have an account, we need to create a set of security keys. There’s documentation about users and access keys available here.

JetS3t uses Apache Commons Logging, so we’ll use it too when we want to print information about what we’re doing.
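
A minimal sketch of that logger, assuming a test class we’ll simply call JetS3tClientExample (the name is our own placeholder, not from the library), looks like this:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// logger used in the snippets below
private static final Log log = LogFactory.getLog(JetS3tClientExample.class);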

3. Connecting to Simple Storage

Now that we have an AWS access key and secret key, we can connect to S3 storage.

3.1. Connecting to AWS

First, we create AWS credentials and then use them to connect to the service:

AWSCredentials awsCredentials 
  = new AWSCredentials("access key", "secret key");
s3Service = new RestS3Service(awsCredentials);

RestS3Service is our connection to Amazon S3. It uses HttpClient to communicate with S3 over REST.
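
Hard-coding the keys is fine for a quick test; as a sketch, we could also pull them from environment variables. The variable names below follow the usual AWS conventions and are our own choice here, since JetS3t doesn’t read them automatically:

// read the keys from the environment instead of hard-coding them
String accessKey = System.getenv("AWS_ACCESS_KEY_ID");
String secretKey = System.getenv("AWS_SECRET_ACCESS_KEY");
AWSCredentials awsCredentials = new AWSCredentials(accessKey, secretKey);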

3.2. Verifying Connection

We can verify that we’ve successfully connected to the service by listing buckets:

S3Bucket[] myBuckets = s3Service.listAllBuckets();

Depending on whether or not we’ve created buckets before, the array may be empty, but if the operation doesn’t throw an exception, we have a valid connection.
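
For example, we can log each bucket name to see what the account already holds:

// log the name of every bucket we own; an empty array is still a successful call
for (S3Bucket myBucket : myBuckets) {
    log.info("Bucket: " + myBucket.getName());
}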

4. Bucket Management

With a connection to Amazon S3, we can create buckets to hold our data. S3 is an object storage system. Data is uploaded as objects and stored in buckets.

Since all S3 buckets share the same global namespace, each one must have a unique name.

4.1. Creating a Bucket

Let’s try to create a bucket named “mybucket“:

S3Bucket bucket = s3Service.createBucket("mybucket");

This fails with an exception:

org.jets3t.service.S3ServiceException: Service Error Message.
  -- ResponseCode: 409, ResponseStatus: Conflict, XML Error Message:
  <?xml version="1.0" encoding="UTF-8"?>
  <Error>
    <Code>BucketAlreadyExists</Code>
    <Message>The requested bucket name is not available.
    The bucket namespace is shared by all users of the system.
    Please select a different name and try again.</Message>
    <BucketName>mybucket</BucketName>
    <RequestId>07BE34FF3113ECCF</RequestId>
  </Error>
at org.jets3t.service.S3Service.createBucket(S3Service.java:1586)
The name “mybucket” is, predictably, already taken. For the rest of the tutorial, we’ll make up our own unique names.

Let’s try again with a different name:

S3Bucket bucket = s3Service.createBucket("myuniquename");
log.info(bucket);

With a unique name, the call succeeds, and we see information about our bucket:

[INFO] JetS3tClient - S3Bucket
[name=myuniquename,location=US,creationDate=Sat Mar 31 16:47:47 EDT 2018,owner=null]
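
Because collisions like the one above are so common, a simple trick (our own addition, not part of the article’s sample code) is to append something unique, such as a UUID, to a readable prefix:

// "baeldung-" plus a random UUID is very unlikely to collide
String uniqueBucketName = "baeldung-" + UUID.randomUUID();
S3Bucket uniqueBucket = s3Service.createBucket(uniqueBucketName);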

4.2. Deleting a Bucket

Deleting a bucket is as easy as creating it, except for one thing: buckets must be empty before they can be removed!

s3Service.deleteBucket("myuniquename");

This will throw an exception for a bucket that is not empty.
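
If we’d rather handle that case than let it propagate, a small sketch like this one will do; here we simply log the failure:

try {
    s3Service.deleteBucket("myuniquename");
} catch (ServiceException e) {
    // S3 reports a non-empty bucket as a 409 Conflict (BucketNotEmpty)
    log.error("Could not delete bucket: " + e.getMessage());
}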

4.3. Specifying the Bucket Region

Buckets can be created in a specific data center. For JetS3t, the default is Northern Virginia in the United States, or “us-east-1.”

We can override this by specifying a different region:

S3Bucket euBucket 
  = s3Service.createBucket("eu-bucket", S3Bucket.LOCATION_EUROPE);
S3Bucket usWestBucket = s3Service
  .createBucket("us-west-bucket", S3Bucket.LOCATION_US_WEST);
S3Bucket asiaPacificBucket = s3Service
  .createBucket("asia-pacific-bucket", S3Bucket.LOCATION_ASIA_PACIFIC);

JetS3t has an extensive list of regions defined as constants.

5. Upload, Download, and Delete Data

Once we have a bucket, we can add objects to it. Buckets are intended to be long-lasting, and there’s no hard limit on the size or number of objects a bucket can contain.

Data is uploaded to S3 by creating S3Objects. We can upload data from an InputStream, but JetS3t also provides convenience methods for Strings and Files.

5.1. String Data

Let’s take a look at Strings first:

S3Object stringObject = new S3Object("object name", "string object");
s3Service.putObject("myuniquebucket", stringObject);

Similar to buckets, objects have names. However, object names only live inside their buckets, so we don’t have to worry about them being globally unique.

We create the object by passing a name and the data to the constructor. Then we store it with putObject.

When we use this method to store Strings with JetS3t, it sets the correct content type for us.

Let’s query S3 for information about our object and look at the content type:

StorageObject objectDetailsOnly 
  = s3Service.getObjectDetails("myuniquebucket", "object name");
log.info("Content type: " + objectDetailsOnly.getContentType() + " length: " 
  + objectDetailsOnly.getContentLength());

getObjectDetails() retrieves the object’s metadata without downloading it. When we log the content type, we see:

[INFO] JetS3tClient - Content type: text/plain; charset=utf-8 length: 9

JetS3t identified the data as text and set the length for us.

Let’s download the data and compare it to what we uploaded:

S3Object downloadObject = 
  s3Service.getObject("myuniquebucket", "object name");
String downloadString = new BufferedReader(new InputStreamReader(
  downloadObject.getDataInputStream())).lines().collect(Collectors.joining("\n"));
 
assertTrue("string object".equals(downloadString));

Data is retrieved in the same S3Object we used to upload it, with the bytes available in a DataInputStream.

5.2. File Data

The process for uploading files is similar to Strings:

File file = new File("src/test/resources/test.jpg");
S3Object fileObject = new S3Object(file);
s3Service.putObject("myuniquebucket", fileObject);

When S3Objects are passed a File, they derive their name from the base name of the files they contain:

[INFO] JetS3tClient - File object name is test.jpg

JetS3t takes the File and uploads it for us. It will attempt to load a mime.types file from the classpath and use it to identify the type of the file and set the content type appropriately.

If we retrieve the object info of our file upload and get the content type we see:

[INFO] JetS3tClient - Content type: application/octet-stream
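
If the lookup falls back to application/octet-stream like this, nothing stops us from setting the content type ourselves before uploading; this is a small addition on top of the earlier snippet:

S3Object fileObject = new S3Object(file);
// override the detected content type before the upload
fileObject.setContentType("image/jpeg");
s3Service.putObject("myuniquebucket", fileObject);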

Let’s download our file to a new one and compare the contents:

String getFileMD5(String filename) throws IOException {
    try (FileInputStream fis = new FileInputStream(new File(filename))) {
        return DigestUtils.md5Hex(fis);
    }
}

S3Object fileObject = s3Service.getObject("myuniquebucket", "test.jpg"); 
File newFile = new File("/tmp/newtest.jpg"); 
Files.copy(fileObject.getDataInputStream(), newFile.toPath(), 
  StandardCopyOption.REPLACE_EXISTING);
String origMD5 = getFileMD5("src/test/resources/test.jpg");
String newMD5 = getFileMD5("/tmp/newtest.jpg");
assertTrue(origMD5.equals(newMD5));

Similar to Strings, we downloaded the object and used the DataInputStream to create a new file. Then we calculated an MD5 hash for both files and compared them.

5.3. Streaming Data

When we upload objects other than Strings or Files, we have a bit more work to do:

ArrayList<Integer> numbers = new ArrayList<>();
// add the elements we check for after the download
Collections.addAll(numbers, 2, 3, 5, 7);

ByteArrayOutputStream bytes = new ByteArrayOutputStream();
ObjectOutputStream objectOutputStream = new ObjectOutputStream(bytes);
objectOutputStream.writeObject(numbers);
objectOutputStream.flush();

ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(bytes.toByteArray());

S3Object streamObject = new S3Object("stream");
streamObject.setDataInputStream(byteArrayInputStream);
streamObject.setContentLength(byteArrayInputStream.available());
streamObject.setContentType("binary/octet-stream");

s3Service.putObject("myuniquebucket", streamObject);

We need to set our content type and length before uploading.

Retrieving this stream means reversing the process:

S3Object newStreamObject = s3Service.getObject("myuniquebucket", "stream");

ObjectInputStream objectInputStream = new ObjectInputStream(
  newStreamObject.getDataInputStream());
ArrayList<Integer> newNumbers = (ArrayList<Integer>) objectInputStream
  .readObject();

assertEquals(2, (int) newNumbers.get(0));
assertEquals(3, (int) newNumbers.get(1));
assertEquals(5, (int) newNumbers.get(2));
assertEquals(7, (int) newNumbers.get(3));

For different data types, the content type property can be used to select a different method for decoding the object.
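
As a rough sketch (the content types here are just the ones we set ourselves earlier), such a dispatch could look like this:

S3Object stored = s3Service.getObject("myuniquebucket", "stream");
String contentType = stored.getContentType();

if ("binary/octet-stream".equals(contentType)) {
    // we serialized this object with ObjectOutputStream, so read it back the same way
    ObjectInputStream objectInput = new ObjectInputStream(stored.getDataInputStream());
    List<Integer> values = (List<Integer>) objectInput.readObject();
} else if (contentType.startsWith("text/plain")) {
    // plain text: read it back as a String, as in section 5.1
}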

6. Copying, Moving and Renaming Data

6.1. Copying Objects

Objects can be copied inside S3, without retrieving them.

Let’s copy our test file from section 5.2, and verify the result:

S3Object targetObject = new S3Object("testcopy.jpg");
s3Service.copyObject(
  BucketName, "test.jpg", 
  "myuniquebucket", targetObject, false);
S3Object newFileObject = s3Service.getObject(
  "myuniquebucket", "testcopy.jpg");

File newFile = new File("src/test/resources/testcopy.jpg");
Files.copy(
  newFileObject.getDataInputStream(), 
  newFile.toPath(), 
  REPLACE_EXISTING);
String origMD5 = getFileMD5("src/test/resources/test.jpg");
String newMD5 = getFileMD5("src/test/resources/testcopy.jpg");
 
assertTrue(origMD5.equals(newMD5));

We can copy objects inside the same bucket, or between two different ones.

If the last argument is true, the copied object will receive new metadata. Otherwise, it will retain the source object’s metadata.

If we want to modify the metadata, we can set the flag to true:

targetObject = new S3Object("testcopy.jpg");
targetObject.addMetadata("My_Custom_Field", "Hello, World!");
s3Service.copyObject(
  "myuniquebucket", "test.jpg", 
  "myuniquebucket", targetObject, true);

6.2. Moving Objects

Objects can be moved to another S3 bucket in the same region. A move operation is a copy followed by a delete.

If the copy operation fails, the source object isn’t deleted. If the delete operation fails, the object will still exist in the source and also in the destination location.

Moving an object looks similar to copying it:

s3Service.moveObject(
  "myuniquebucket",
  "test.jpg",
  "myotheruniquebucket",
  new S3Object("spidey.jpg"),
  false);

6.3. Renaming Objects

JetS3t has a convenience method for renaming objects. To change an object’s name, we merely call it with a new S3Object:

s3Service.renameObject(
  "myuniquebucket", "test.jpg", new S3Object("spidey.jpg"));

7. Conclusion

In this tutorial, we used JetS3t to connect to Amazon S3. We created and deleted buckets. Then we added different types of data to buckets and retrieved the data. To wrap things up, we copied and moved our data.

Code samples, as always, can be found over on GitHub.

