Most of what we know about augmented reality comes from games, science-fiction films and AR apps for smartphones. Essentially, it is about extending the original content with additional visual elements (known as "augmentations"). The position of an augmentation, and its life-cycle, may be anchored either to its target content object or to the view.
- Content type: video stream, street view, geo data;
- Augmentations: map, navigation widgets;
- Anchoring: geo points.
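To make the anchoring idea concrete, here is a minimal data-model sketch in TypeScript. The interface and function names are illustrative assumptions, not the protocol's actual API:

```typescript
// Hypothetical data model: a content object carries an anchor (here a
// geo point), and an augmentation references the object it is bound to.
interface GeoPoint { lat: number; lon: number }

interface ContentObject {
  id: string;
  anchor: GeoPoint; // where the object lives in the source content
}

interface Augmentation {
  widget: string; // e.g. "navigation-widget"
  target: string; // id of the content object it is anchored to
}

// Resolve an augmentation to the position of its anchor, or undefined
// if the target content object is not present in the parsed content.
function resolveAnchor(aug: Augmentation, objects: ContentObject[]): GeoPoint | undefined {
  return objects.find(o => o.id === aug.target)?.anchor;
}
```

The same shape works for view-anchored augmentations: the anchor would then describe a position in the viewport rather than in the content.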
The most challenging step is parsing the raw source data into structured content and identifying its content objects (like the geo points in the example above). The augmentations then need to be embedded into the view in the right places.
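As a toy illustration of this parsing step, the sketch below extracts content objects from raw markup by matching a `data-id` attribute. Both the attribute name and the regex approach are assumptions for the example; a real parser would walk a proper document tree:

```typescript
// Hypothetical adapter: scan raw HTML-like markup and collect every
// element carrying a data-id attribute as a content object.
function parseContentObjects(html: string): { id: string }[] {
  const objects: { id: string }[] = [];
  const re = /data-id="([^"]+)"/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(html)) !== null) {
    objects.push({ id: m[1] });
  }
  return objects;
}
```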
In the case of a web page, the page itself is the content, its elements are the content objects, and embedded buttons and other widgets are the augmentations. Let's look at a social feed page and the possible augmentations for it:
- Exchange rates widget;
- Custom filters controls;
- Personal ads;
- Crypto accounts linked to social profile (connected accounts data);
- Additional inserts;
- Additional buttons;
- Profile badges;
- Entry points for new functionality;
- Background themes and colors;
- Hidden and paywall content;
- Community stickers;
- NFT profile pictures;
- Visual frames and selections;
- Community posts from other social media;
- and many others.
Parsing the content means parsing the DOM, and embedding means inserting a widget's HTML into it. All augmentation widgets are organized into an AugmentationDOM, which overlays the original content's DOM.
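The overlay idea can be sketched as follows. This is a simplified node model, not the browser DOM, and the node shape and `overlay` function are illustrative assumptions:

```typescript
// Simplified tree standing in for the DOM; a real implementation works
// on browser nodes and keeps the AugmentationDOM as a separate layer.
interface DomNode { tag: string; id?: string; children: DomNode[] }

// Attach a widget as an overlay sibling next to the content object it
// targets, leaving the original subtree untouched.
function overlay(root: DomNode, targetId: string, widget: DomNode): boolean {
  for (const child of [...root.children]) {
    if (child.id === targetId) {
      root.children.push({ tag: "div", id: `aug-${targetId}`, children: [widget] });
      return true;
    }
    if (overlay(child, targetId, widget)) return true;
  }
  return false;
}
```

Inserting the widget as a sibling wrapper rather than rewriting the target's own subtree keeps the original content intact, which matches the overlay approach described above.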
A web page can be quite dynamic: content objects may appear and disappear at any time or move around the page (e.g. through dynamic loading on scroll). Augmentations should follow their linked content objects.
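One way to follow dynamic content is to diff successive snapshots of the visible content-object ids and attach or detach augmentations accordingly. The sketch below shows only that diff; in a browser the snapshots would typically be driven by a MutationObserver. The function name is illustrative:

```typescript
// Diff two snapshots of content-object ids: augmentations are attached
// to objects that appeared and detached from objects that disappeared.
function diffObjects(before: string[], after: string[]) {
  const beforeSet = new Set(before);
  const afterSet = new Set(after);
  return {
    appeared: after.filter(id => !beforeSet.has(id)),
    disappeared: before.filter(id => !afterSet.has(id)),
  };
}
```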
Although the implementation of the Dapplets Protocol is heavily optimized for the web, the protocol itself is web-agnostic and was developed with a variety of content types in mind.