Hello Developer!

Here, you’ll find everything you need to create great Augmented Reality applications with the Arrakis SDK.

Getting Started

Include Arrakis in your project

The first step is to download the SDK.

Just copy the Arrakis.aar library into the libs folder of your Android Studio project, configure Gradle to include it: compile fileTree(dir: 'libs', include: 'Arrakis.aar'), and enjoy developing interactive Augmented Reality contents!

You might also want to include the Epson Moverio BT200Ctrl.jar library in your project to enable the Stereoscopic 3D Mode on your Moverio BT-200.
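For reference, this is roughly how the dependencies block of your app-level build.gradle could look with both libraries included (a sketch, assuming the legacy compile syntax shown above):

dependencies {
    // The Arrakis SDK
    compile fileTree(dir: 'libs', include: 'Arrakis.aar')
    // Optional: the Epson library that enables the Stereoscopic 3D Mode
    compile fileTree(dir: 'libs', include: 'BT200Ctrl.jar')
}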

Examples

You can start learning how to use Arrakis by checking the examples project provided within the SDK package.

Prerequisites

How to build the example project

In order to build the Arrakis example project, you’ll need to follow these steps:

ARGeo Moverio Fragment

The ARGeoMoverioFragment is the entry point of the framework: it is the container in which the Augmented Reality world gets built and handled.

It is an Android Fragment, so you can handle it as you normally do with Android Fragments. For example, this is how you create it and attach it to an Android View using the FragmentManager:

arGeoMoverioFragment = new ARGeoMoverioFragment();
getFragmentManager().beginTransaction().replace(R.id.container, arGeoMoverioFragment).commit();
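Here R.id.container is assumed to be a plain container in your Activity layout, for example:

<!-- Illustrative container; any ViewGroup whose id is referenced as R.id.container works -->
<FrameLayout
    android:id="@+id/container"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />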

ARGeoMoverioFragment takes care of the following tasks:
- handle the motion sensors to detect the orientation of the device
- handle the location provider to detect the geographic position of the device
- handle the AREntities object, which represents the Augmented Reality Entities in the world
- hold the 3D and 2D layers where Augmented Reality Entities are drawn
- hold and expose the telemetry data computed by the engine
- apply display filters to the Augmented Reality Entities it contains

Structure

The core of the framework is the ARGeoFragment class. This class implements the base logic of the framework and is extended by ARGeoMoverioFragment and ARGeoMoverioStereoFragment (covered later). The ARGeoMoverioFragment we are going to use here is basically an extended ARGeoFragment tuned to the Moverio BT-200 field of view.

The structure of a generic ARGeoFragment can be represented as follows:

Lifecycle

The ARGeoMoverioFragment lifecycle takes place within the resumed state of the Android Fragment and runs on its own Thread. To interact with it in a thread-safe way, the framework provides two callbacks: onARGeoInitialized, fired once the Augmented Reality environment has been initialized, and onARGeoDraw, fired on every rendered frame.

Each callback is handled by setting the proper listener object on the ARGeoMoverioFragment; the interfaces provided for the listeners are OnARGeoInitializedListener and OnARGeoDrawListener.

This is an example of how listeners get set on ARGeoMoverioFragment:

arGeoMoverioFragment.setOnARGeoInitializedListener(new ARGeoFragment.OnARGeoInitializedListener() {
    @Override
    public void onARGeoInitialized(ARGeoFragment arGeoFragment) {
        Log.d(TAG, "Do initialization here");
    }
});

arGeoMoverioFragment.setOnARGeoDrawListener(new ARGeoFragment.OnARGeoDrawListener() {
    @Override
    public void onARGeoDraw(ARGeoFragment arGeoFragment) {
        Log.d(TAG, "Do runtime stuff here");
    }
});

Since onARGeoDraw is fired continuously, it works well when your logic should do something on every frame, but it is not suitable for one-shot events that need to be executed just once (for example adding a new AREntity, changing the AbsoluteLocation coordinates of an existing AREntity, or rotating an Object3D associated with an AREntity). For those cases ARGeoMoverioFragment provides a mechanism similar to the well known Android runOnUiThread(Runnable r) mechanism:

Here are three examples of how to run a task on the ARGeoMoverioFragment render Thread:

// Adding a new AREntity
private void spawnAREntity(final AREntity arEntity) {
    arGeoMoverioFragment.runOnARGeoThread(new Runnable() {
        @Override
        public void run() {
            arGeoMoverioFragment.addAREntity(arEntity);
        }
    });
}

// Changing the coordinates of an existing AREntity
private void changeCoordinates(final AREntity arEntity) {
    arGeoMoverioFragment.runOnARGeoThread(new Runnable() {
        @Override
        public void run() {
            arEntity.getAbsoluteLocation().setLatitude(40.0);
            arEntity.getAbsoluteLocation().setLongitude(-74.0);
            arEntity.getAbsoluteLocation().setAltitude(100);
        }
    });
}

// Rotating an Object3D associated with an AREntity
private void rotateObject3D(final AREntity arEntity) {
    arGeoMoverioFragment.runOnARGeoThread(new Runnable() {
        @Override
        public void run() {
            arEntity.getObject3D().setRotX(90);
        }
    });
}

Adding and removing AREntities

You can start adding and removing AREntities in ARGeoMoverioFragment right after the onARGeoInitialized callback is fired; this is why the onARGeoInitialized callback is usually the right place to create and add your AREntities to ARGeoMoverioFragment.
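As a minimal sketch (the entity itself is left unbuilt, and the runOnARGeoThread wrapping is there to satisfy the threading rule described below):

arGeoMoverioFragment.setOnARGeoInitializedListener(new ARGeoFragment.OnARGeoInitializedListener() {
    @Override
    public void onARGeoInitialized(ARGeoFragment arGeoFragment) {
        // Adding must happen on the ARGeo thread (see below)
        arGeoMoverioFragment.runOnARGeoThread(new Runnable() {
            @Override
            public void run() {
                AREntity arEntity = new AREntity(arGeoMoverioFragment);
                // [build your AREntity]
                arGeoMoverioFragment.addAREntity(arEntity);
            }
        });
    }
});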

To add and remove AREntity objects in ARGeoMoverioFragment the following methods are provided: addAREntity(AREntity arEntity), addAREntities(List arEntities), removeAREntity(AREntity arEntity) and clearAREntities().

While the methods removeAREntity(AREntity arEntity) and clearAREntities() are guaranteed to be thread safe and can be called from any thread, the methods addAREntity(AREntity arEntity) and addAREntities(List arEntities) must be called within the ARGeoMoverioFragment thread; this is easily done by using the runOnARGeoThread(Runnable r) method. Also the creation of an Object3D (namely the Object3D object3D = new Object3D() instruction) must occur within the ARGeoMoverioFragment thread, otherwise no Object3D will be shown.

Location callbacks

ARGeoMoverioFragment provides two callbacks related to location discovery: onLocationResolved and onLocationChanged.

Each callback is handled by setting the proper listener object on the ARGeoMoverioFragment; the interfaces provided for the listeners are OnLocationResolvedListener and OnLocationChangedListener.

This is an example of how listeners get set on ARGeoMoverioFragment:

arGeoMoverioFragment.setOnLocationResolvedListener(new ARGeoFragment.OnLocationResolvedListener() {
    @Override
    public void onLocationResolved(Location location) {
        Log.d(TAG, "Location resolved," +
                " Lat: " + location.getLatitude() +
                " Lng: " + location.getLongitude() +
                " Alt: " + location.getAltitude());
    }
});

arGeoMoverioFragment.setOnLocationChangedListener(new ARGeoFragment.OnLocationChangedListener() {
    @Override
    public void onLocationChanged(Location location) {
        Log.d(TAG, "Location changed," +
                " Lat: " + location.getLatitude() +
                " Lng: " + location.getLongitude() +
                " Alt: " + location.getAltitude());
    }
});

Mock location

For development purposes ARGeoMoverioFragment allows you to set a mock location; to do so you simply need to call the setMockLocation(Location location) method:

Location mockLocation = new Location("");
mockLocation.setLatitude(40.0);
mockLocation.setLongitude(-74.0);
mockLocation.setAltitude(100);
arGeoMoverioFragment.setMockLocation(mockLocation);

By passing null as the argument, ARGeoMoverioFragment will go back to receiving location updates from the location provider:

arGeoMoverioFragment.setMockLocation(null);

Filters

ARGeoMoverioFragment enables you to filter the displayed Augmented Reality Entities based on distance and category criteria.

Distance filter

This filter allows you to show only the Augmented Reality Entities that happen to be within a specified distance. First of all you need to enable the distance filter on the ARGeoMoverioFragment:

arGeoMoverioFragment.setDistanceFilterEnable(true);

then you can specify the maximum distance (in meters) for visible Augmented Reality Entities:

arGeoMoverioFragment.setDistanceFilter(1000); // 1000 meters

Category filter

This filter allows you to show Augmented Reality Entities based on a category criterion. First of all you need to enable the category filter on the ARGeoMoverioFragment:

arGeoMoverioFragment.setCategoryFilterEnable(true);

then you shall assign an arbitrary numeric value representing a category to your AREntities by using the setCategory(long category) method provided by the AREntity object:

private long CATEGORY_1 = 0;
private long CATEGORY_2 = 1;

AREntity arEntityCat1 = new AREntity(arGeoMoverioFragment);
[build your AREntity]
arEntityCat1.setCategory(CATEGORY_1);

arGeoMoverioFragment.addAREntity(arEntityCat1);

AREntity arEntityCat2 = new AREntity(arGeoMoverioFragment);
[build your AREntity]
arEntityCat2.setCategory(CATEGORY_2);

arGeoMoverioFragment.addAREntity(arEntityCat2);

and finally you can show or hide specific categories in ARGeoMoverioFragment by using the methods addCategoryFilter(long category) and removeCategoryFilter(long category).

This is an example of how to do it:

// This hides all the AREntities whose category is CATEGORY_1
arGeoMoverioFragment.removeCategoryFilter(CATEGORY_1);

// This shows all the AREntities whose category is CATEGORY_2
arGeoMoverioFragment.addCategoryFilter(CATEGORY_2);

MaxScalingDistance

In many cases you don’t want your Augmented Reality Entities to disappear just because they are too far from you (of course Augmented Reality Entities get scaled down according to the reality); rather, you might want them to be scaled down only until they reach a certain distance, and maintain a visible size even when they get very far from you. This is achieved by calling setMaxScalingDistance(float distance) on ARGeoMoverioFragment:

arGeoMoverioFragment.setMaxScalingDistance(30); // 30 meters

This way all the Augmented Reality Entities’ 3D objects and 2D Views will be scaled until they reach a 30 meter distance and will maintain that size beyond it.

The default value for this parameter is 50 (meters).

MotionDetection

ARGeoMoverioFragment implements a motion detection logic based on the Android Motion Sensors; the sensor data is filtered using a low-pass filter.

Since the Moverio BT-200 has two gyroscopes, one inside the headset and one inside the device, and since it lets you switch between them even at runtime, you should make sure the selected gyroscope is the headset one (you need to use the BT200Ctrl.jar library to switch between gyroscopes).

ARGeoData

ARGeoData is a data holder exposed by ARGeoMoverioFragment; it holds all the computed data coming from the location provider. Accessing this data can be useful for various purposes (for example reading the current location to display it on the screen); consult our full API reference for details.

SensorData

SensorData is a data holder exposed by ARGeoMoverioFragment; it holds all the computed data coming from the motion sensors. Accessing this data can be useful for various purposes (for example reading the orientation vector to discover in which direction the user is looking); consult our full API reference for details.
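As a sketch only, both holders could be read on every frame from the onARGeoDraw callback. The accessors getARGeoData() and getSensorData() below are hypothetical names, so check the API reference for the actual methods:

arGeoMoverioFragment.setOnARGeoDrawListener(new ARGeoFragment.OnARGeoDrawListener() {
    @Override
    public void onARGeoDraw(ARGeoFragment arGeoFragment) {
        // Hypothetical accessors; see the API reference for the real names
        ARGeoData arGeoData = arGeoMoverioFragment.getARGeoData();
        SensorData sensorData = arGeoMoverioFragment.getSensorData();
    }
});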

Location Provider

By default ARGeoMoverioFragment uses its own LocationProvider, which is a GPS based tracker.

If you want to provide your own location using a different strategy, perhaps implementing the Google Location Services API, a QRCode based location discovery, or even a controller based motion, you can write your own LocationProvider class and set it on ARGeoMoverioFragment by calling the method setLocationProvider(LocationProvider locationProvider) as follows:

LocationProvider myLocationProvider = new MyLocationProvider(arGeoMoverioFragment);

arGeoMoverioFragment.setLocationProvider(myLocationProvider);

A custom LocationProvider must extend the LocationProvider abstract class

Creating a custom LocationProvider

To extend the LocationProvider abstract class you must implement the following methods: initLocationProvider(Context context), startLocationTracking() and stopLocationTracking().

You should also forward the location updates produced by your own logic to the framework; this is done by calling the method onProvidedLocationChanged on the LocationProvider’s private member onProvidedLocationChangedListener.
For example, to implement the Google Location Services API you would write something like this:

@Override
public void onLocationChanged(Location location) {
    onProvidedLocationChangedListener.onProvidedLocationChanged(location);
}

Notice that the constructor of the LocationProvider abstract class forces you to pass an OnProvidedLocationChangedListener object as an argument: this is where the OnProvidedLocationChangedListener comes from. Don’t get scared, this is simply needed to explicitly link the LocationProvider to the ARGeoMoverioFragment, which is in fact an OnProvidedLocationChangedListener already configured to consume the location updates.

To make it simple:
your ARGeoMoverioFragment IS the OnProvidedLocationChangedListener object you shall pass to your LocationProvider constructor (in most cases).
For example, if you implemented a Google Location Services API LocationProvider, you would instantiate it like this:

LocationProvider googleLocationProvider = new GLSLocationProvider(arGeoMoverioFragment);
arGeoMoverioFragment.setLocationProvider(googleLocationProvider);
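Putting the pieces together, here is a minimal sketch of a custom LocationProvider backed by the standard Android LocationManager; the class name is illustrative, while the overridden methods are the ones shown in the NullLocationProvider example below:

public class GpsLocationProvider extends LocationProvider implements LocationListener {

    private LocationManager locationManager;

    public GpsLocationProvider(OnProvidedLocationChangedListener listener) {
        super(listener);
    }

    @Override
    public void initLocationProvider(Context context) {
        locationManager = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
    }

    @Override
    public void startLocationTracking() {
        // At most one update per second, no minimum distance
        locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000, 0, this);
    }

    @Override
    public void stopLocationTracking() {
        locationManager.removeUpdates(this);
    }

    @Override
    public void onLocationChanged(Location location) {
        // Forward every fix to the framework
        onProvidedLocationChangedListener.onProvidedLocationChanged(location);
    }

    // Remaining LocationListener callbacks are not needed here
    @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
    @Override public void onProviderEnabled(String provider) {}
    @Override public void onProviderDisabled(String provider) {}
}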

To clarify the meaning of this:

An odd example: NullLocationProvider

An example of NOT using the OnProvidedLocationChangedListener is the following: you need to build an Augmented Reality user interface in which you are not going to spawn Augmented Reality Entities at specific geographic locations, but only position them relative to the user. You don’t need to receive location updates, and in fact you don’t need to waste resources tracking your location either.

In this case the NullLocationProvider (an empty LocationProvider that does nothing) can fit your purpose:

public class NullLocationProvider extends LocationProvider {

    public NullLocationProvider() {
        super(null);
    }

    @Override
    public void initLocationProvider(Context context) {
    }

    @Override
    public void startLocationTracking() {
    }

    @Override
    public void stopLocationTracking() {
    }
}

By setting this example as ARGeoMoverioFragment’s LocationProvider, no location tracking will be started.

In fact you don’t even need to implement the NullLocationProvider yourself: ARGeoMoverioFragment lets you set it automatically by calling the method arGeoMoverioFragment.setNullLocationProvider();

More LocationProvider examples

Check the ArrakisExampleProject for examples of how to create a custom LocationProvider

AREntities

The AREntities object held by ARGeoMoverioFragment is the logical container of all the AREntity objects belonging to it.

It is in fact an extended ArrayList that implements useful methods to retrieve a specific AREntity starting from the View, Object3D, LocationAbsolute or LocationRelative object belonging to it.

This object shall not be accessed directly when adding or removing AREntity objects; instead you should use the ARGeoMoverioFragment methods: setAREntities(AREntities arEntities), addAREntity(AREntity arEntity), removeAREntity(AREntity arEntity), clearAREntities().

AREntity arEntity = new AREntity(arGeoMoverioFragment);
[build your AREntity]
arGeoMoverioFragment.addAREntity(arEntity);

[...]

arGeoMoverioFragment.removeAREntity(arEntity);

AREntity

The AREntity object represents an Augmented Reality entity located somewhere in the world.

The AREntity is composed of the following elements: a View, an Object3D, a LocationAbsolute and a LocationRelative.

This class also holds several entity specific configuration parameters.

When you create a new AREntity you shall pass the ARGeoMoverioFragment you are going to add it to as the constructor argument:

AREntity arEntity = new AREntity(arGeoMoverioFragment);

Then you can set the components required by the entity like this:

arEntity.setView(arView);
arEntity.setObject3D(object3D);
arEntity.setLocationAbsolute(locationAbsolute);
arEntity.setLocationRelative(locationRelative);

or you can use the full constructor:

AREntity arEntity = new AREntity(arGeoFragment,
        locationAbsolute,
        locationRelative,
        object3D,
        arView);

Entity type

With the entity type property we define how the AREntity will behave in terms of positioning. Two possible types are available: AREntity.ENTITY_ABSOLUTE and AREntity.ENTITY_RELATIVE.

The default value for this property is AREntity.ENTITY_ABSOLUTE

Absolute Location Entity

The Absolute Location Entity is an Augmented Reality entity that is positioned in a specific geographic location in the world identified by world coordinates (Latitude, Longitude, Altitude).

It is defined by setting the AREntity.ENTITY_ABSOLUTE flag, like this:

arEntity.setEntityType(AREntity.ENTITY_ABSOLUTE);

When the user changes location, the Absolute Location Entity will remain at the geographic position set for it.

The Absolute Location Entity geographic position is defined by the LocationAbsolute object belonging to the AREntity; in this case the LocationRelative object (if set) will be ignored.

This is the type you should choose when implementing geolocation based Augmented Reality contents.

Relative Location Entity

The Relative Location Entity is an Augmented Reality entity positioned relative to the user; its position can be defined with either cartesian (X, Y, Z) or spherical (Radius, Inclination, Azimuth) coordinates.

It is defined by setting the AREntity.ENTITY_RELATIVE flag, like this:

arEntity.setEntityType(AREntity.ENTITY_RELATIVE);

When the user changes location, the Relative Location Entity will move with them, keeping its position relative to the user.

The Relative Location Entity position is defined by the LocationRelative object belonging to the AREntity; in this case the LocationAbsolute object (if set) will be ignored.

This is the type you should choose when implementing Augmented Reality user interfaces.

View rotation type

The View rotation type property defines how 2D Android Views are rotated on the screen to counterbalance the roll of the device. Three possible types are available: AREntity.ROTATION_NONE, AREntity.ROTATION_AXIS and AREntity.ROTATION_FULL.

The default value for this property is AREntity.ROTATION_FULL

No Rotation

With this configuration the 2D View will not be counter-rotated on device roll, so it will always stay aligned with the device screen.

It is defined by setting the AREntity.ROTATION_NONE flag, like this:

arEntity.setViewRotationType(AREntity.ROTATION_NONE);

Axis Rotation

With this configuration the View will be rotated in 90 degree steps only, matching the nearest rotation that counterbalances the device roll (among 0°, 90°, 180° and 270°); in simple terms, the View rotation will behave like the Android screen rotation.

It is defined by setting the AREntity.ROTATION_AXIS flag, like this:

arEntity.setViewRotationType(AREntity.ROTATION_AXIS);

Full Rotation

With this configuration the View will be rotated with the best precision to counterbalance the device roll: by rolling the device, the View will be rotated on the screen so that it maintains its straight position according to the reality.

It is defined by setting the AREntity.ROTATION_FULL flag, like this:

arEntity.setViewRotationType(AREntity.ROTATION_FULL);

This is the preferred choice for Augmented Reality Views because of the fancy effect it gives them, making them more contextual to the reality, but it has a significant performance impact compared with the other two configurations; if you plan to have a lot of AREntities (>100) with complicated View layouts on screen simultaneously, you should consider another configuration.

Category

The setCategory(long category) method lets you set an arbitrary category on the AREntity; the category will then be used when applying a category filter through the ARGeoMoverioFragment category filter feature.

The category can be any long number, you can set it like this:

arEntity.setCategory(5);

You can then configure the category filter on ARGeoMoverioFragment to show or hide AREntities belonging to a certain category, like this:

// Remember to enable the category filter before using it
arGeoMoverioFragment.setCategoryFilterEnable(true);

// this will show all the AREntities whose category is 5
arGeoMoverioFragment.addCategoryFilter(5);

Remember that the categories included in the filter get shown, while the excluded categories get hidden.

View

Almost any Android View object can be used to represent an Augmented Reality Entity in the reality space; setting an Android View on an Augmented Reality Entity will result in your View floating in the air at the absolute or relative location you specified in your AREntity.

The Augmented Reality Views that get shown are not mere graphic elements but actual Android Views that maintain their structure, their properties and their interactivity. This means you can edit them as you please using normal Android SDK methods, click a button and actually trigger its OnClickListener, or swipe a SeekBar triggering its OnSeekBarChangeListener; the reference to your View object in your code is maintained as well.

Here’s an example of how to set a View on an AREntity, for instance a LinearLayout object:

First we inflate a View from an xml layout:

LinearLayout myLinearLayout = (LinearLayout) getLayoutInflater().inflate(R.layout.my_ar_widget, arGeoMoverioFragment.getRootView(), false);

Note that inflating a layout for ARGeoMoverioFragment requires these two conditions to be fulfilled:

  • Pass the rootView of arGeoMoverioFragment as the root parameter
  • Set the attachToParent flag to false

Then we set the View on the desired AREntity:

arEntity.setView(myLinearLayout);

The AREntity can now be added to ARGeoMoverioFragment to show your View as an Augmented Reality element (of course you still have to set a location for it).
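Putting it all together, here is a minimal sketch (the layout name and the coordinates are illustrative):

final LinearLayout myLinearLayout = (LinearLayout) getLayoutInflater()
        .inflate(R.layout.my_ar_widget, arGeoMoverioFragment.getRootView(), false);

final LocationAbsolute locationAbsolute = new LocationAbsolute();
locationAbsolute.setLatitude(40.721529);
locationAbsolute.setLongitude(-74.005774);
locationAbsolute.setAltitude(100);

final AREntity arEntity = new AREntity(arGeoMoverioFragment);
arEntity.setView(myLinearLayout);
arEntity.setLocationAbsolute(locationAbsolute);

// Adding must happen on the ARGeo thread (see "Adding and removing AREntities")
arGeoMoverioFragment.runOnARGeoThread(new Runnable() {
    @Override
    public void run() {
        arGeoMoverioFragment.addAREntity(arEntity);
    }
});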

View scaling

The Views shown as Augmented Reality elements will naturally scale their size according to their distance, just as happens with real objects.

If you want to inhibit this behavior you can set the AREntity scale enable flag to false:

arEntity.setViewScaleEnable(false);

This way the view will always maintain its original size.

The default value for this property is true

Base depth

This parameter basically defines how big an Android View becomes when it is projected in the real world.

More precisely, the View base depth defines the distance (in meters) from the user at which the Android View is shown at 100% scale; when the AREntity gets closer than this distance the View will be scaled up, while when it gets farther the View will be scaled down, according to the perspective distance perception.

You can set it this way:

arEntity.setViewBaseDepth(3); // 3 meters

The default value for this property is 5 (meters)

Object3D

ARGeoMoverioFragment implements a 3D environment using the Rajawali library v0.9; the Object3D you set on an AREntity is in fact a Rajawali Object3D. While ARGeoMoverioFragment takes care of positioning your 3D object in the reality, you can handle the 3D object behavior using the Rajawali APIs.

Just to get started, here’s an example of how to create an Object3D and set it on an AREntity:

arGeoMoverioFragment.runOnARGeoThread(new Runnable() {
    @Override
    public void run() {
        Object3D cube = new Cube(2);
        Material material = new Material();
        material.setColor(Color.GREEN);
        cube.setMaterial(material);
        arEntity.setObject3D(cube);
    }
});

You can also load your own 3D model from the raw folder of your project:

arGeoMoverioFragment.runOnARGeoThread(new Runnable() {
    @Override
    public void run() {
        ARGeoRenderer arGeoRenderer = arGeoMoverioFragment.getARGeoRenderer();
        LoaderOBJ loaderObj = new LoaderOBJ(getResources(), arGeoRenderer.getTextureManager(), R.raw.my_3d_model_obj);

        try {
            loaderObj.parse();
            Object3D object3D = loaderObj.getParsedObject();
            arEntity.setObject3D(object3D);
        } catch (ParsingException e) {
            e.printStackTrace();
        }
    }
});

The following file formats for 3D models are supported in Rajawali: .FBX .OBJ .AWD .MD2 .GCode .STL .MD5Mesh

Remember that the creation of an Object3D (namely the Object3D object3D = new Object3D() instruction) must occur within the ARGeoMoverioFragment thread, otherwise no Object3D will be shown.

Location Absolute

The LocationAbsolute is an object that represents a specific geographic location in the world identified by world coordinates (Latitude, Longitude, Altitude).

To set it up you just need to create it, set the desired Latitude, Longitude and Altitude, and set it to your AREntity:

LocationAbsolute locationAbsolute = new LocationAbsolute();
locationAbsolute.setLatitude(40.721529);
locationAbsolute.setLongitude(-74.005774);
locationAbsolute.setAltitude(100);

arEntity.setLocationAbsolute(locationAbsolute);

You can also directly set an Android Location object:

locationAbsolute.setLocation(myAndroidLocation);

Remember that if you set your AREntity type to be Relative the LocationAbsolute will be ignored

LocationRelative

The LocationRelative is an object that represents a spatial position relative to the user, which can be defined with either cartesian (X, Y, Z) or spherical (Radius, Inclination, Azimuth) coordinates.

To set it up you just need to create it, set the desired cartesian or spherical coordinates, and set it on your AREntity. This is how you set cartesian coordinates:

LocationRelative locationRelative = new LocationRelative();

// This sets the location to be at:
//   1 meter north
//   0.3 meters up
//   2 meters east
locationRelative.setCartesianCoordinates(1, 0.3, -2);

arEntity.setEntityType(AREntity.ENTITY_RELATIVE);
arEntity.setLocationRelative(locationRelative);

and this is how you set spherical coordinates:

LocationRelative locationRelative = new LocationRelative();

// This sets the location to be at:
//   2 meters radius
//   3/8 * PI inclination (a bit over the horizon)
//   1/4 * PI azimuth (facing north-east)
locationRelative.setSphericalCoordinates(2, 3.0 / 8 * Math.PI, 1.0 / 4 * Math.PI);

arEntity.setEntityType(AREntity.ENTITY_RELATIVE);
arEntity.setLocationRelative(locationRelative);

Remember to set your AREntity type to be Relative, otherwise the LocationRelative will be ignored.

Cartesian coordinate system

The World Coordinate System axes are mapped to the Android Coordinate System considering the default device position to be facing straight north; the world geographic coordinates are thus mapped to Android XYZ coordinates as follows:

When setting cartesian coordinates consider the unit (1) as 1 meter.

Spherical coordinate system

The spherical coordinate system defines a point in space by specifying the radius, inclination and azimuth from a center position (the user, in our case).
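For intuition only, this is the standard conversion between the two systems, assuming a Y-up axis mapping with the inclination measured from the vertical axis (LocationRelative accepts both systems directly, so you never need to do this yourself):

// Spherical (radius, inclination, azimuth) to cartesian (x, y, z),
// with y pointing up and the inclination measured from the vertical axis
double radius = 2.0;
double inclination = 3.0 / 8 * Math.PI;
double azimuth = 1.0 / 4 * Math.PI;

double x = radius * Math.sin(inclination) * Math.sin(azimuth);
double y = radius * Math.cos(inclination);
double z = radius * Math.sin(inclination) * Math.cos(azimuth);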

Hands free interaction

Arrakis provides a strategy that lets you interact with Augmented Reality Entities without using your hands: you point at them with a crosshair that you move using your head movements.

Basically what you have is a crosshair with a fixed position on your screen (it can be any Android View) that you move relative to the AREntities by moving your head; it looks something like this:

To implement this kind of interaction you need to follow these steps:

GazeLayout

The GazeLayout is an extended FrameLayout that implements the Gaze interface; it can wrap any other Android Layout and reacts to the following events: gaze on, gaze hover, gaze out, gaze click and long gaze (see the listeners below).

Here’s an example of creating a GazeLayout in your layout xml:

<net.joinpad.arrakis.interaction.GazeLayout
    android:layout_width="wrap_content"
    android:layout_height="wrap_content">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="I'm a TextView inside a GazeLayout" />

</net.joinpad.arrakis.interaction.GazeLayout>

To handle what to do when one of these events occurs, you shall set the proper listeners on the GazeLayout.

GazeListener

The GazeListener enables you to define the callbacks fired when gaze events occur: onGazeOn, onGazeHover, onGazeOut and onGazeClick.

This is an example of how to implement the GazeLayout callbacks:

gazeLayout.setGazeListener(new GazeListener() {
    @Override
    public boolean onGazeOn(final View v) {
        Log.d(TAG, "onGazeOn triggered");
        return true;
    }

    @Override
    public boolean onGazeHover(final View v) {
        Log.d(TAG, "onGazeHover triggered");
        return true;
    }

    @Override
    public boolean onGazeOut(final View v) {
        Log.d(TAG, "onGazeOut triggered");
        return true;
    }

    @Override
    public boolean onGazeClick(final View v) {
        Log.d(TAG, "onGazeClick triggered");
        return true;
    }
});

LongGazeListener

The LongGazeListener enables you to define the callback fired when a long gaze event occurs:

gazeLayout.setLongGazeListener(new LongGazeListener() {
    @Override
    public boolean onLongGaze(final View v) {
        Log.d(TAG, "onLongGaze triggered");
        return true;
    }
});

The onLongGaze(View v) callback is fired after a specified delay. By default the delay is 1000 milliseconds, but it can be changed by calling setLongGazeDelay(int longGazeDelay) on the GazeLayout or by using the custom xml attribute “longGazeDelay”. Here’s an example of setting the delay in your xml layout:

<net.joinpad.arrakis.interaction.GazeLayout
    android:id="@+id/my_gazelayout"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    app:longGazeDelay="500">

    [Your layout content]

</net.joinpad.arrakis.interaction.GazeLayout>
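The same delay can equivalently be set in code:

// Equivalent to the app:longGazeDelay attribute above
gazeLayout.setLongGazeDelay(500); // milliseconds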

Enable and disable gaze interaction on a specific element

To enable and disable the gaze interactions on a specific GazeLayout you can set its gaze state by calling the methods enableGaze() and disableGaze(), like this:

// This disables the gaze interactions on myGazeLayout
myGazeLayout.disableGaze();

[...]

// This enables the gaze interactions on myGazeLayout
myGazeLayout.enableGaze();

The default value for this property is true

Trigger an arbitrary gaze event

The gaze handler provides a mechanism to trigger an arbitrary gaze event on the GazeLayout currently aimed at by the crosshair; this is done by calling the ARGeo method gazeClick():

myARGeoFragment.gazeClick();

When gazeClick() is called, if there is a registered and enabled GazeLayout aimed at by the crosshair, GazeListener.onGazeClick(View v) will be fired on it.

Hardware Keys Handler

The HardwareKeysHandler is an object meant to wrap the hardware keys behavior of your Moverio BT-200 device; more precisely, it is meant to define various sets of hardware key behaviors, so that a whole combination can be assigned to all the hardware keys in one shot.

Of course it’s still possible to remap the Moverio BT-200 hardware keys in the traditional way (by overriding onKeyUp/onKeyDown in the Activity); this wrapper just gives you a simple interface to do it in a more orderly fashion.

The hardware keys that can be remapped on the Moverio BT-200 are the Volume Up, Volume Down, Back and Menu keys.

This is an example of how to create a HardwareKeysHandler that handles the KeyPress interactions for all the remappable keys:

HardwareKeysHandler myHardwareKeyPressHandler = new HardwareKeysHandler() {
    @Override
    public boolean onKeyVolumeUp(int keyCode, KeyEvent keyEvent) {
        Log.d(TAG, "Volume Up Key has been pressed");
        return true;
    }

    @Override
    public boolean onKeyVolumeDown(int keyCode, KeyEvent keyEvent) {
        Log.d(TAG, "Volume Down Key has been pressed");
        return true;
    }

    @Override
    public boolean onKeyBack(int keyCode, KeyEvent keyEvent) {
        Log.d(TAG, "Back Key has been pressed");
        return true;
    }

    @Override
    public boolean onKeyMenu(int keyCode, KeyEvent keyEvent) {
        Log.d(TAG, "Menu Key has been pressed");
        return true;
    }
};

To use the HardwareKeysHandler you must explicitly call its handleKeyEvent(int keyCode, KeyEvent keyEvent) method from within the overridden onKeyDown(int keyCode, KeyEvent event) Android method (just as you normally do when overriding hardware key behavior):

@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    return myHardwareKeyPressHandler.handleKeyEvent(keyCode, event);
}

A better implementation could be done like this:

@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    boolean handled = false;
    if (myHardwareKeyPressHandler != null) {
        handled = myHardwareKeyPressHandler.handleKeyEvent(keyCode, event);
    }
    if (!handled) {
        handled = super.onKeyDown(keyCode, event);
    }
    return handled;
}

Notice that an instance of HardwareKeysHandler represents a hardware key configuration for handling either the KeyDown events OR the KeyUp events; this means that if you also want to handle the KeyUp events, you have to create another instance of HardwareKeysHandler and use them both:

HardwareKeysHandler myHardwareKeyReleaseHandler = new HardwareKeysHandler() {
    @Override
    public boolean onKeyVolumeUp(int keyCode, KeyEvent keyEvent) {
        Log.d(TAG, "Volume Up Key has been released");
        return true;
    }

    @Override
    public boolean onKeyVolumeDown(int keyCode, KeyEvent keyEvent) {
        Log.d(TAG, "Volume Down Key has been released");
        return true;
    }

    @Override
    public boolean onKeyBack(int keyCode, KeyEvent keyEvent) {
        Log.d(TAG, "Back Key has been released");
        return true;
    }

    @Override
    public boolean onKeyMenu(int keyCode, KeyEvent keyEvent) {
        Log.d(TAG, "Menu Key has been released");
        return true;
    }
};

@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    boolean handled = false;
    if (myHardwareKeyPressHandler != null) {
        handled = myHardwareKeyPressHandler.handleKeyEvent(keyCode, event);
    }
    if (!handled) {
        handled = super.onKeyDown(keyCode, event);
    }
    return handled;
}

@Override
public boolean onKeyUp(int keyCode, KeyEvent event) {
    boolean handled = false;
    if (myHardwareKeyReleaseHandler != null) {
        handled = myHardwareKeyReleaseHandler.handleKeyEvent(keyCode, event);
    }
    if (!handled) {
        handled = super.onKeyUp(keyCode, event);
    }
    return handled;
}

A suggested usage for the HardwareKeysHandler is to make some hardware key trigger the gazeClick() method on your ARGeo entry point:

HardwareKeysHandler myHardwareKeyPressHandler = new HardwareKeysHandler() {
    @Override
    public boolean onKeyVolumeUp(int keyCode, KeyEvent keyEvent) {
        // We call the gazeClick() method on volume up key press
        myARGeoFragment.gazeClick();
        return true;
    }
};

Stereoscopy

The Moverio BT-200 features a Stereoscopic 3D Mode that can be used to create stereoscopic contents. Using stereoscopy when developing Augmented Reality is not just a fancy feature that makes your contents look prettier: it is mandatory if you want your contents to be properly contextualized within the reality.

What the Moverio BT-200 Stereoscopic 3D Mode does is in fact just switch the way the Android screen is projected on the SmartGlasses: in normal 2D Mode the device projects the same screen on both lenses, while the Stereoscopic 3D Mode extends the screen across both lenses by projecting the left half of the screen on the left lens only and the right half on the right lens only, as shown below:

The chance to project different content on each lens is the key to producing Stereoscopic 3D contents; check out Wikipedia for more information about Stereoscopy.

When using the Stereoscopic 3D Mode, some considerations must be made:

Enable 3D Mode on Moverio BT-200

To enable the Stereoscopic 3D Mode you need the Epson Moverio BT200Ctrl.jar library, so first of all download it, place it under the libs folder of your Android Studio project, and configure Gradle to include it:

compile fileTree(dir: 'libs', include: 'BT200Ctrl.jar')

Once the library is properly included in the project, you will be able to use the DisplayControl object to toggle the Stereoscopic 3D Mode:

displayControl = new DisplayControl(MyActivity.this);
displayControl.setMode(DisplayControl.DISPLAY_MODE_3D, true);

It is suggested that you enable the Stereoscopic 3D Mode in the Activity (or Fragment) onResume() and switch back to the 2D Mode in onPause():

@Override
protected void onResume() {
    super.onResume();
    mDisplayControl.setMode(DisplayControl.DISPLAY_MODE_3D, true);
}

@Override
protected void onPause() {
    super.onPause();
    mDisplayControl.setMode(DisplayControl.DISPLAY_MODE_2D, true);
}

Hide the System Bar

The first thing that comes into sight when toggling the Stereoscopic 3D Mode is that Android’s System Bar gets messed up: it gets drawn the same way as in 2D Mode, but due to the screen extension provided by the Stereoscopic 3D Mode you will see one half of the System Bar drawn in each lens, overlapping the other one, resulting in a very ugly visual effect.

It’s common knowledge that removing the System Bar on Android 4.0.3 is not an easy task, especially on non-rooted devices; that’s why Arrakis comes with a helper class specifically designed for this purpose: the SystemBarsCtrl class.

This is how you hide the System Bar:

SystemBarsCtrl.hide(myActivity.getWindow());

and this is how you show it again:

SystemBarsCtrl.show(myActivity.getWindow());

These methods can be called anywhere within the Activity (or Fragment) lifecycle, but it is suggested to call hide() inside the onResume() method:

@Override
protected void onResume() {
    super.onResume();
    SystemBarsCtrl.hide(myActivity.getWindow());
}

Any normal Android Activity will by default try to make the System Bar visible if it is not, so you normally don’t need to manually restore the System Bar when leaving an Activity that has previously hidden it; it will just be restored automatically.

ARGeo Moverio Stereo Fragment

Now that we know how to prepare the environment to optimally host stereoscopy, it’s time to learn how to create Augmented Reality stereoscopic contents.

Stereoscopic contents can be created in Arrakis by using another extended version of ARGeoFragment: the ARGeoMoverioStereoFragment.

The ARGeoMoverioStereoFragment behaves just like the regular ARGeoMoverioFragment, but it automatically creates a stereoscopic environment fully compatible with the Moverio BT-200 Stereoscopic 3D Mode, making 3D models as well as 2D Views appear to have perspective depth and to be positioned at a certain distance from the observer within the reality.

You can create and attach an ARGeoMoverioStereoFragment to a layout just like you do with the regular ARGeoMoverioFragment:

arGeoMoverioStereoFragment = new ARGeoMoverioStereoFragment();
getFragmentManager().beginTransaction().replace(R.id.container, arGeoMoverioStereoFragment).commit();

Everything discussed about ARGeoMoverioFragment also applies to ARGeoMoverioStereoFragment, except for the following difference:

Every configuration and behavior of ARGeoMoverioFragment also applies to ARGeoMoverioStereoFragment, so refer to the ARGeoMoverioFragment usage for details on how to create and configure your Augmented Reality environment.

Pupil distance in stereoscopy

The pupil distance is a key factor when implementing stereoscopy: different subjects usually have different distances between their pupils, and the perceived stereoscopic depth changes significantly between, for example, a 5.0cm and a 7.0cm pupil distance.

To make it simple, a stereoscopic element drawn on the glasses that is perceived at 2 meters depth by a subject with a 6.4cm pupil distance will be perceived as closer by a subject with a 7.4cm pupil distance (as if the element were placed at 1.7 meters depth) and as farther by a subject with a 5.4cm pupil distance (as if the element were placed at 2.3 meters depth).

To handle these scenarios, ARGeoMoverioStereoFragment provides a method to configure an arbitrary pupil distance; this is done by calling the method setPupilDistance(float pupilDistance) like this:

// This sets the pupil distance at 0.068 meters, or 6.8 centimeters
myARGeoMoverioStereoFragment.setPupilDistance(0.068f);

By doing this, ARGeoMoverioStereoFragment will automatically correct the stereoscopic rendering so that subjects with different pupil distances perceive the Augmented Reality elements properly.

Keep in mind that the pupil distance must be expressed in meters, so for example 6.8cm is expressed as 0.068.

You can measure your pupil distance by using the pupil distance setup example contained in the Arrakis Example Projects.

The default value for this property is 0.064 meters, which is also the average human pupil distance

Stereoscopic layouts

Congratulations, you now know everything you need to produce stereoscopic augmented reality contents using Arrakis!

Below is a more technical overview of how stereoscopy is implemented in Arrakis.

While all the 3D contents in ARGeoMoverioStereoFragment are made stereoscopic by simply taking two camera renders of the 3D environment and projecting one on each lens of the device, the 2D Android Views are made stereoscopic using some custom tools that are part of Arrakis.

If you want to learn how things are done under the hood, you might want to learn about the stereoscopic layouts: a powerful set of tools provided by Arrakis that is silently used by ARGeoMoverioStereoFragment to make native Android Views stereoscopic, and that can also be used outside ARGeoMoverioStereoFragment, for example to create stereoscopic interfaces.

First of all, consider that the BT200Ctrl.jar library technically enables us to produce stereoscopic contents but doesn’t help at all in doing it; what you are theoretically supposed to do to implement stereoscopy using Android Views is to clone each View you want to be stereoscopic, show it in both screen halves, and play with the horizontal offset to make the View appear stereoscopically closer or farther.

It’s an easy guess that doing this in an actual app will drive any developer crazy, especially considering that Android Views cannot be cloned.

Arrakis takes care of this and thus provides some specific components to make this task easy.

StereoLayout

The StereoLayout is the main component needed to implement stereoscopy the easy way.

It represents a physical perspective plane placed at an arbitrary distance from the observer; the distance is defined in meters by the depth attribute. All the Views contained inside it will automatically be graphically cloned and placed on each half of the screen; they will also be horizontally scaled to compensate for the screen stretching and offset to match the specified perspective distance.

If you draw an Android layout inside a StereoLayout like this:

this is what actually gets drawn on the Android screen:

this is what gets projected on the Moverio BT-200 lenses:

and this is what gets perceived by the observer:

The StereoLayout saves a lot of work in terms of interface design; the following are the main features that characterize it:

Usage

A StereoLayout is in fact an extended FrameLayout, so it inherits its behavior when adding children to it; a simple StereoLayout can be written as follows:

<net.joinpad.arrakis.layout.StereoLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:depth="5.0">

    <LinearLayout
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:orientation="vertical">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="This is a TextView"/>

        <ImageView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:src="@drawable/my_image"/>

    </LinearLayout>

</net.joinpad.arrakis.layout.StereoLayout>

Specify the Depth

Notice how in the previous example the depth of the StereoLayout is specified directly in the xml by using the depth attribute. There are two ways to assign a depth to a StereoLayout: statically, through the app:depth xml attribute, or programmatically at runtime.

As mentioned, the depth can be changed in real time, even using animations. This is how you animate the depth of a StereoLayout, making it move from 7.5 meters to 3.0 meters using an AccelerateDecelerateInterpolator and taking 1500 milliseconds:

myStereoLayout.animateDepth(7.5,
        3.0,
        new AccelerateDecelerateInterpolator(),
        1500);

The animateDepth() method provides several signatures to be used.

Interactivity

The interactivity of all the Views is maintained after they are rendered in stereoscopic mode; this means you can use the Android cursor to interact with them just like you do in the normal environment.

Consider that when the Moverio BT-200 Stereoscopic 3D Mode is activated the screen just gets extended over both lenses, while the Android cursor keeps acting the same way: you will basically see it stretched, and with just one eye at a time. By moving it from left to right, when you reach the end of the left half of the screen the cursor will pass from your left eye to your right eye, making it look like it appears again at your left border.
The following image shows how the cursor behaves in Stereoscopic 3D Mode:

The good news is that you can still use the cursor to interact with stereoscopic layouts. In simple terms, if you have a button, which through stereoscopy is doubled in each lens, you can click either of the 2 graphic instances and the interaction will be triggered, no matter which one you clicked.

StereoGroup

The StereoGroup is a container for StereoLayouts; it is needed because it does part of the job necessary to make stereoscopy work.

A StereoGroup is normally used as the root element of the layout; this is not mandatory, but consider that making it smaller than the entire screen makes no sense. It can contain any number of StereoLayouts.

Keep in mind these two facts:

So your stereoscopic layout xml will usually look like this:

<net.joinpad.arrakis.layout.StereoGroup
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <net.joinpad.arrakis.layout.StereoLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:depth="3.0">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="This is a TextView at 3 meters depth" />

    </net.joinpad.arrakis.layout.StereoLayout>

    <net.joinpad.arrakis.layout.StereoLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:depth="5.0">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="This is a TextView at 5 meters depth" />

    </net.joinpad.arrakis.layout.StereoLayout>
</net.joinpad.arrakis.layout.StereoGroup>

The StereoGroup doesn’t usually need any particular configuration to work.

Z-Order Handling

Android Layouts don’t usually provide z-order handling: when the layouts contained in a layout group overlap each other, they are sorted along the z-axis by putting the last layout added (or declared in xml) on top of the previously added ones.

This is not true for the StereoLayouts contained in a StereoGroup: the StereoLayouts automatically get their z-order sorted by bringing the closest layout in terms of distance (or depth) to the top. This is useful because this way the closest Views are displayed on top, just as happens in reality. This also applies to real-time depth changes: if any StereoLayout changes its depth and gets closer (or farther) with respect to another overlapping StereoLayout, their z-order will be automatically handled to display the closest StereoLayout on top.

For instance, in the previous example the 3 meter depth StereoLayout will be shown on top of the 5 meter depth StereoLayout, despite the fact that the 5 meter depth StereoLayout is declared after the 3 meter depth one.
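For example, animating the deeper StereoLayout closer than the other one automatically brings it to the top (a sketch, assuming farStereoLayout references the 5 meter depth StereoLayout of the previous example):

// Bring the deeper layout from 5.0 to 2.0 meters: as soon as it gets
// closer than the 3 meter layout, the StereoGroup re-sorts the z-order
farStereoLayout.animateDepth(5.0,
        2.0,
        new AccelerateDecelerateInterpolator(),
        1500);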

The StereoGroup is the component in charge of orchestrating this behavior; the feature can be disabled by calling the method setHandleZOrder(boolean handleZOrder):

myStereoGroup.setHandleZOrder(false);

The default value for this property is true

StereoToast message

The StereoToast message is an extended Toast message that implements stereoscopy and can thus be used within the Moverio BT-200 Stereoscopic 3D Mode with a nice graphic result.

It is handled the same way you normally handle Toast messages, except that its stereoscopic depth can be specified. Here’s an example of how to trigger a StereoToast message:

// This shows a stereoscopic Toast message at 2.5 meters depth.
StereoToast.makeText(MyActivity.this, "This is a StereoToast", StereoToast.LENGTH_LONG, 2.5f).show();

The makeText() method provides several signatures to be used.

The default value for the depth property is 3 meters

StereoCore

The StereoCore is not a layout, but a static component that performs under the hood all the math needed to produce stereoscopy.

The interesting part about this component is that it is where you specify the pupil distance applied to all the StereoLayouts belonging to your application.

You can specify the pupil distance like this:

// This sets the pupil distance at 0.068 meters, or 6.8 centimeters
StereoCore.setPupilDistance(0.068f);

Note that you don’t have to do this when using ARGeoMoverioStereoFragment’s setPupilDistance(), as that method already triggers StereoCore’s setPupilDistance().