
How-To Tutorials - Mobile


Ionic Components

Packt
08 Jun 2017
16 min read
In this article by Gaurav Saini, the author of the book Hybrid Mobile Development with Ionic, we will learn about the following topics: building vPlanet Commerce, and Ionic 2 components. (For more resources related to this topic, see here.)

Building vPlanet Commerce

The vPlanet Commerce app is an e-commerce app which demonstrates various Ionic components integrated inside the application, as well as some third-party components built by the community. Let's start by creating the application from scratch using the sidemenu template.

You now have the basic application ready based on the sidemenu template. The next immediate step is to take a reference from the ionic-conference-app for building the initial components of the application, such as the walkthrough. Let's create a walkthrough component via the CLI generate command:

```bash
$ ionic g page walkthrough
```

As we get started with the walkthrough component, we need to add logic to show it only the first time the user installs the application:

```typescript
// src/app/app.component.ts
// Check if the user has already seen the walkthrough
this.storage.get('hasSeenWalkThrough').then((hasSeenWalkThrough) => {
  if (hasSeenWalkThrough) {
    this.rootPage = HomePage;
  } else {
    this.rootPage = WalkThroughPage;
  }
  this.platformReady();
})
```

So, we store a boolean value to check whether the user has seen the walkthrough before.
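The excerpt never shows where this flag gets written; presumably the walkthrough page sets it once the user finishes or skips the tour. Here is a minimal sketch of that counterpart, assuming the same Storage instance is injected into WalkThroughPage (the startApp method name is illustrative, not from the book):

```typescript
// src/pages/walkthrough/walkthrough.ts (sketch; not part of the original excerpt)
import { Component } from '@angular/core';
import { NavController } from 'ionic-angular';
import { Storage } from '@ionic/storage';
import { HomePage } from '../home/home';

@Component({
  selector: 'page-walkthrough',
  templateUrl: 'walkthrough.html'
})
export class WalkThroughPage {
  constructor(public navCtrl: NavController, public storage: Storage) { }

  // Called when the user taps the final "Get Started" button
  startApp() {
    // Persist the flag so the walkthrough is skipped on the next launch
    this.storage.set('hasSeenWalkThrough', true);
    this.navCtrl.setRoot(HomePage);
  }
}
```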
[color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> </ion-content> </ion-menu> <!-- logged in menu --> <ion-menu id="loggedInMenu" [content]="content"> <ion-header> <ion-toolbar> <ion-title>Menu</ion-title> </ion-toolbar> </ion-header> <ion-content class="outer-content"> <ion-list> <ion-list-header> {{'navigate' | translate}} </ion-list-header> <button ion-item menuClose *ngFor="let p of appPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> <ion-list> <ion-list-header> {{'account' | translate}} </ion-list-header> <button ion-item menuClose *ngFor="let p of loggedInPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> <button ion-item menuClose *ngFor="let p of otherPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> </ion-content> </ion-menu> As, our app start mainly from app.html so we declare rootPage here: <!-- main navigation --> <ion-nav [root]="rootPage" #content swipeBackEnabled="false"></ion-nav> Let’s now look into what all pages, services, and filter we will be having inside our app. Rather than mentioning it as a bullet list, the best way to know this is going through app.module.ts file which has all the declarations, imports, entryComponents and providers. // src/app/app.modules.ts import { NgModule, ErrorHandler } from '@angular/core'; import { IonicApp, IonicModule, IonicErrorHandler } from 'ionic-angular'; import { TranslateModule, TranslateLoader, TranslateStaticLoader } from 'ng2-translate/ng2-translate'; import { Http } from '@angular/http'; import { CloudSettings, CloudModule } from '@ionic/cloud-angular'; import { Storage } from '@ionic/storage'; import { vPlanetApp } from './app.component'; import { AboutPage } from '../pages/about/about'; import { PopoverPage } from '../pages/popover/popover'; import { AccountPage } from '../pages/account/account'; import { LoginPage } from '../pages/login/login'; import { SignupPage } from '../pages/signup/signup'; import { WalkThroughPage } from '../pages/walkthrough/walkthrough'; import { HomePage } from '../pages/home/home'; import { CategoriesPage } from '../pages/categories/categories'; import { ProductsPage } from '../pages/products/products'; import { ProductDetailPage } from '../pages/product-detail/product-detail'; import { WishlistPage } from '../pages/wishlist/wishlist'; import { ShowcartPage } from '../pages/showcart/showcart'; import { CheckoutPage } from '../pages/checkout/checkout'; import { ProductsFilterPage } from '../pages/products-filter/products-filter'; import { SupportPage } from '../pages/support/support'; import { SettingsPage } from '../pages/settings/settings'; import { SearchPage } from '../pages/search/search'; import { UserService } from '../providers/user-service'; import { DataService } from '../providers/data-service'; import { OrdinalPipe } from '../filters/ordinal'; // 3rd party modules import { Ionic2RatingModule } from 'ionic2-rating'; export function createTranslateLoader(http: Http) { return new TranslateStaticLoader(http, './assets/i18n', '.json'); } // Configure database priority export function provideStorage() { return new Storage(['sqlite', 'indexeddb', 'localstorage'], { name: 'vplanet' }) } const cloudSettings: CloudSettings = { 'core': { 'app_id': 'f8fec798' } }; @NgModule({ declarations: 
Let's now look at all the pages, services, and filters we have inside our app. Rather than listing them as bullet points, the best way to see them is to go through the app.module.ts file, which has all the declarations, imports, entryComponents, and providers:

```typescript
// src/app/app.module.ts
import { NgModule, ErrorHandler } from '@angular/core';
import { IonicApp, IonicModule, IonicErrorHandler } from 'ionic-angular';
import { TranslateModule, TranslateLoader, TranslateStaticLoader } from 'ng2-translate/ng2-translate';
import { Http } from '@angular/http';
import { CloudSettings, CloudModule } from '@ionic/cloud-angular';
import { Storage } from '@ionic/storage';

import { vPlanetApp } from './app.component';
import { AboutPage } from '../pages/about/about';
import { PopoverPage } from '../pages/popover/popover';
import { AccountPage } from '../pages/account/account';
import { LoginPage } from '../pages/login/login';
import { SignupPage } from '../pages/signup/signup';
import { WalkThroughPage } from '../pages/walkthrough/walkthrough';
import { HomePage } from '../pages/home/home';
import { CategoriesPage } from '../pages/categories/categories';
import { ProductsPage } from '../pages/products/products';
import { ProductDetailPage } from '../pages/product-detail/product-detail';
import { WishlistPage } from '../pages/wishlist/wishlist';
import { ShowcartPage } from '../pages/showcart/showcart';
import { CheckoutPage } from '../pages/checkout/checkout';
import { ProductsFilterPage } from '../pages/products-filter/products-filter';
import { SupportPage } from '../pages/support/support';
import { SettingsPage } from '../pages/settings/settings';
import { SearchPage } from '../pages/search/search';

import { UserService } from '../providers/user-service';
import { DataService } from '../providers/data-service';
import { OrdinalPipe } from '../filters/ordinal';

// 3rd party modules
import { Ionic2RatingModule } from 'ionic2-rating';

export function createTranslateLoader(http: Http) {
  return new TranslateStaticLoader(http, './assets/i18n', '.json');
}

// Configure database priority
export function provideStorage() {
  return new Storage(['sqlite', 'indexeddb', 'localstorage'], { name: 'vplanet' })
}

const cloudSettings: CloudSettings = {
  'core': {
    'app_id': 'f8fec798'
  }
};

@NgModule({
  declarations: [
    vPlanetApp,
    AboutPage,
    AccountPage,
    LoginPage,
    PopoverPage,
    SignupPage,
    WalkThroughPage,
    HomePage,
    CategoriesPage,
    ProductsPage,
    ProductsFilterPage,
    ProductDetailPage,
    SearchPage,
    WishlistPage,
    ShowcartPage,
    CheckoutPage,
    SettingsPage,
    SupportPage,
    OrdinalPipe,
  ],
  imports: [
    IonicModule.forRoot(vPlanetApp),
    Ionic2RatingModule,
    TranslateModule.forRoot({
      provide: TranslateLoader,
      useFactory: createTranslateLoader,
      deps: [Http]
    }),
    CloudModule.forRoot(cloudSettings)
  ],
  bootstrap: [IonicApp],
  entryComponents: [
    vPlanetApp,
    AboutPage,
    AccountPage,
    LoginPage,
    PopoverPage,
    SignupPage,
    WalkThroughPage,
    HomePage,
    CategoriesPage,
    ProductsPage,
    ProductsFilterPage,
    ProductDetailPage,
    SearchPage,
    WishlistPage,
    ShowcartPage,
    CheckoutPage,
    SettingsPage,
    SupportPage
  ],
  providers: [
    { provide: ErrorHandler, useClass: IonicErrorHandler },
    { provide: Storage, useFactory: provideStorage },
    UserService,
    DataService
  ]
})
export class AppModule {}
```

Ionic components

There are many Ionic JavaScript components which we can use effectively while building our application. The best approach is to look around for the features we will need. Let's get started with the home page of our e-commerce application, which has an image slider with banners on it.

Slides

The Slides component is a multi-section container which can be used in multiple scenarios, such as a tutorial view or a banner slider. The <ion-slides> component contains multiple <ion-slide> elements which can be dragged or swiped left/right. Slides have multiple configuration options that can be passed to ion-slides, such as autoplay, pager, direction (vertical/horizontal), initialSlide, and speed. Using slides is really simple, as we just have to include the markup inside home.html; no dependency is required in the home.ts file:

```html
<ion-slides pager #adSlider (ionSlideDidChange)="logLenth()" style="height: 250px">
  <ion-slide *ngFor="let banner of banners">
    <img [src]="banner">
  </ion-slide>
</ion-slides>
```

```typescript
// Defining the banner image paths
export class HomePage {
  products: any;
  banners: String[];

  constructor() {
    this.banners = [
      'assets/img/banner-1.webp',
      'assets/img/banner-2.webp',
      'assets/img/banner-3.webp'
    ]
  }
}
```
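The excerpt lists autoplay, direction, initialSlide, and speed as available options but only demonstrates pager. As a hedged illustration, the same banner slider with the remaining options applied as attributes; the values here are arbitrary, not recommendations:

```html
<!-- Sketch: the same banner slider with more of the documented options applied.
     autoplay is a delay in milliseconds; initialSlide is a zero-based index;
     speed is the transition duration in milliseconds. -->
<ion-slides pager autoplay="3000" direction="horizontal" initialSlide="0" speed="400" style="height: 250px">
  <ion-slide *ngFor="let banner of banners">
    <img [src]="banner">
  </ion-slide>
</ion-slides>
```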
Lists

Lists are one of the most used components in many applications. Inside lists we can display rows of information. We will be using lists multiple times inside our application, such as on the categories page, where we show multiple sub-categories:

```html
<!-- src/pages/categories/categories.html -->
<ion-content class="categories">
  <ion-list-header *ngIf="!categoryList">Fetching Categories ....</ion-list-header>
  <ion-list *ngFor="let cat of categoryList">
    <ion-list-header>{{cat.name}}</ion-list-header>
    <ion-item *ngFor="let subCat of cat.child">
      <ion-avatar item-left>
        <img [src]="subCat.image">
      </ion-avatar>
      <h2>{{subCat.name}}</h2>
      <p>{{subCat.description}}</p>
      <button ion-button clear item-right (click)="goToProducts(subCat.id)">View</button>
    </ion-item>
  </ion-list>
</ion-content>
```

Loading and toast

The Loading component can be used to indicate activity while blocking any user interaction. One of the most common uses of the loading component is around HTTP calls to the server: as we know, it takes time to fetch data from the server, and until then, for a good user experience, we can show some content such as Loading .. or Login wait .. on login pages. Toast is a small pop-up which provides feedback, usually when some action is performed by the user. Ionic 2 now provides a toast component as part of its library; previously we had to use a native Cordova plugin for toasts, which can still be used as well.

The loading and toast components both have a create method. We have to provide options while creating these components:

```typescript
// src/pages/login/login.ts
import { Component } from '@angular/core';
import { NgForm } from '@angular/forms';
import { NavController, LoadingController, ToastController, Events } from 'ionic-angular';

import { SignupPage } from '../signup/signup';
import { HomePage } from '../home/home';
import { Auth, IDetailedError } from '@ionic/cloud-angular';
import { UserService } from '../../providers/user-service';

@Component({
  selector: 'page-user',
  templateUrl: 'login.html'
})
export class LoginPage {
  login: {email?: string, password?: string} = {};
  submitted = false;

  constructor(public navCtrl: NavController, public loadingCtrl: LoadingController,
              public auth: Auth, public userService: UserService,
              public toastCtrl: ToastController, public events: Events) { }

  onLogin(form: NgForm) {
    this.submitted = true;
    if (form.valid) {
      // start loader
      let loading = this.loadingCtrl.create({
        content: "Login wait...",
        duration: 20
      });
      loading.present();

      this.auth.login('basic', this.login).then((result) => {
        // user is now registered
        this.navCtrl.setRoot(HomePage);
        this.events.publish('user:login');
        loading.dismiss();
        this.showToast(undefined);
      }, (err: IDetailedError<string[]>) => {
        console.log(err);
        loading.dismiss();
        this.showToast(err)
      });
    }
  }

  showToast(response_message: any) {
    let toast = this.toastCtrl.create({
      message: (response_message ? response_message : "Log In Successfully"),
      duration: 1500
    });
    toast.present();
  }

  onSignup() {
    this.navCtrl.push(SignupPage);
  }
}
```

As you can see from the preceding code, creating a loader and a toast is almost identical at the code level, and the options provided while creating them are also similar. We have used the loader here during login, and the toast after it to show the desired message. Setting the duration option is good practice: if a loader is dismissed incorrectly or not handled properly in code, it will block the user from any further interaction with the app. During HTTP calls to the server we might hit connection issues or failures, and in those scenarios a loader without a duration can end up blocking users.
Tabs versus segments

Tabs are the easiest way to switch between views and organize content at a higher level. A segment, on the other hand, is a group of buttons and can be treated as a local switch inside a particular component, mainly used as a filter. With tabs we can build a quick-access bar in the footer, where we can place menu options such as Home, Favorites, and Cart; this gives us one-click access to these pages or components. Alternatively, we can use segments inside the Account component and divide the displayed data into three segments: profile, orders, and wallet:

```html
<!-- src/pages/account/account.html -->
<ion-header>
  <ion-navbar>
    <button menuToggle>
      <ion-icon name="menu"></ion-icon>
    </button>
    <ion-title>Account</ion-title>
  </ion-navbar>
  <ion-toolbar [color]="isAndroid ? 'primary' : 'light'" no-border-top>
    <ion-segment [(ngModel)]="account" [color]="isAndroid ? 'light' : 'primary'">
      <ion-segment-button value="profile">
        Profile
      </ion-segment-button>
      <ion-segment-button value="orders">
        Orders
      </ion-segment-button>
      <ion-segment-button value="wallet">
        Wallet
      </ion-segment-button>
    </ion-segment>
  </ion-toolbar>
</ion-header>

<ion-content class="outer-content">
  <div [ngSwitch]="account">
    <div padding-top text-center *ngSwitchCase="'profile'">
      <img src="http://www.gravatar.com/avatar?d=mm&s=140">
      <h2>{{username}}</h2>
      <ion-list inset>
        <button ion-item (click)="updatePicture()">Update Picture</button>
        <button ion-item (click)="changePassword()">Change Password</button>
        <button ion-item (click)="logout()">Logout</button>
      </ion-list>
    </div>
    <div padding-top text-center *ngSwitchCase="'orders'">
      <!-- Order list data to be shown here -->
    </div>
    <div padding-top text-center *ngSwitchCase="'wallet'">
      <!-- Wallet statement and transactions here -->
    </div>
  </div>
</ion-content>
```

This is how we define a segment in Ionic; we don't need to define anything inside the TypeScript file for this component. With tabs, on the other hand, we have to assign a component to each tab, and we can also access its methods via the Tab instance. Just to mention, we haven't used tabs inside our e-commerce application, as we are using a side menu. A good example to look at is the ionic-conference-app (https://github.com/driftyco/ionic-conference-app), where you will find a side menu and tabs together in a single application:

```html
<!-- We currently don't have a Tabs component inside our e-commerce application;
     below is sample code showing how we could integrate one. -->
<ion-tabs #showTabs tabsPlacement="top" tabsLayout="icon-top" color="primary">
  <ion-tab [root]="Home"></ion-tab>
  <ion-tab [root]="Wishlist"></ion-tab>
  <ion-tab [root]="Cart"></ion-tab>
</ion-tabs>
```

```typescript
import { ViewChild } from '@angular/core';
import { Tabs } from 'ionic-angular';

import { HomePage } from '../pages/home/home';
import { WishlistPage } from '../pages/wishlist/wishlist';
import { ShowcartPage } from '../pages/showcart/showcart';

export class TabsPage {
  @ViewChild('showTabs') tabRef: Tabs;

  // This tells the tabs component which Pages
  // should be each tab's root Page
  Home = HomePage;
  Wishlist = WishlistPage;
  Cart = ShowcartPage;

  constructor() { }

  // We can access multiple methods via the Tabs instance:
  // select(TabOrIndex), previousTab(trimHistory), getByIndex(index)
  // Here we log the currently selected tab.
  ionViewDidEnter() {
    console.log(this.tabRef.getSelected());
  }
}
```

There are many properties available for tabs, like mode, color, tabsPlacement, and tabsLayout; they can be checked in the documentation (https://ionicframework.com/docs/v2/api/components/tabs/Tabs/). Similarly, we can configure some tab properties at the Config level; there you will find which properties can be configured globally or for a specific platform (https://ionicframework.com/docs/v2/api/config/Config/).

Alerts

Alerts are the components Ionic provides for showing trigger alerts, confirms, prompts, or some specific actions. AlertController can be imported from ionic-angular, and it allows us to programmatically create and show alerts inside the application. One thing to note here is that these are JavaScript pop-ups, not native platform pop-ups. There is a Cordova plugin, cordova-plugin-dialogs (https://ionicframework.com/docs/v2/native/dialogs/), which you can use if native dialog UI elements are required.
Currently there are five types of alerts we can show in an Ionic app: basic alerts, prompt alerts, confirmation alerts, radio alerts, and checkbox alerts:

```html
<!-- A radio alert inside src/pages/products/products.html for sorting products -->
<ion-buttons>
  <button ion-button full clear (click)="sortBy()">
    <ion-icon name="menu"></ion-icon>Sort
  </button>
</ion-buttons>
```

```typescript
// On click we call the sortBy method
// src/pages/products/products.ts
import { NavController, PopoverController, ModalController, AlertController } from 'ionic-angular';

export class ProductsPage {
  constructor(public alertCtrl: AlertController) { }

  sortBy() {
    let alert = this.alertCtrl.create();
    alert.setTitle('Sort Options');

    alert.addInput({ type: 'radio', label: 'Relevance', value: 'relevance', checked: true });
    alert.addInput({ type: 'radio', label: 'Popularity', value: 'popular' });
    alert.addInput({ type: 'radio', label: 'Low to High', value: 'lth' });
    alert.addInput({ type: 'radio', label: 'High to Low', value: 'htl' });
    alert.addInput({ type: 'radio', label: 'Newest First', value: 'newest' });

    alert.addButton('Cancel');
    alert.addButton({
      text: 'OK',
      handler: data => {
        console.log(data);
        // Here we can call server APIs
        // using the sort order the user applied.
      }
    });

    alert.present().then(() => {
      // Here we place any function that needs
      // to be called as the alert is opened.
    });
  }
}
```

The alert has Cancel and OK buttons; we have used it here for sorting the products according to relevance, price, or other sorting values. We can prepare custom alerts too, where we can mention multiple options. Just as the previous example has five radio options, we could also add a text input box for taking some input and submitting it. Other than this, while creating alerts, remember that there are alert, input, and button option properties for all the alerts present in the AlertController component (https://ionicframework.com/docs/v2/api/components/alert/AlertController/).

Some alert options:

title // string: title of the alert
subTitle // string (optional): subtitle of the popup
message // string: message for the alert
cssClass // string: custom CSS class name
inputs // array: set of inputs for the alert
buttons // array (optional): array of buttons
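The excerpt only demonstrates the radio type. As a hedged sketch, here is what a confirmation alert could look like, built with the same create method and the options listed above; the title, message, and handler body are illustrative, not from the book:

```typescript
// Sketch: a basic confirmation alert using the options listed above
let confirm = this.alertCtrl.create({
  title: 'Remove item?',
  message: 'Do you want to remove this product from your cart?',
  buttons: [
    {
      text: 'Cancel',
      role: 'cancel' // dismisses the alert without any action
    },
    {
      text: 'Remove',
      handler: () => {
        // Remove the product from the cart here
      }
    }
  ]
});
confirm.present();
```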
Cards and badges

Cards are one of the most important components, used often in mobile and web applications. Cards are popular because they are a great way to organize information and give users access to a good quantity of information even on smaller screens. Cards are really flexible and responsive, and for all these reasons they have been adopted very quickly by developers and companies. We will also be using cards inside our application, on the home page itself, for showing popular products. Ionic provides these different types of cards in its library:

Basic cards
Cards with header and footer
Card lists
Card images
Background cards
Social and map cards

Social and map cards are advanced cards built with custom CSS. We can develop similar advanced cards too:

```html
<!-- src/pages/home/home.html -->
<ion-card>
  <img [src]="prdt.imageUrl"/>
  <ion-card-content>
    <ion-card-title no-padding>
      {{prdt.productName}}
    </ion-card-title>
    <ion-row no-padding class="center">
      <ion-col>
        <b>{{prdt.price | currency }} &nbsp;</b>
        <span class="discount">{{prdt.listPrice | currency}}</span>
      </ion-col>
    </ion-row>
  </ion-card-content>
</ion-card>
```

We have used an image card here, with an image on top and favorite and view button icons below. Similarly, we can use different types of cards wherever required, and at the same time we can customize our cards and mix two types of card using their specific CSS classes or elements.

Badges are small components used to show small pieces of information, for example the number of items in the cart above the cart icon. We have used a badge in our e-commerce application for showing product ratings:

```html
<ion-badge width="25">4.1</ion-badge>
```

Summary

In this article we learned about building vPlanet Commerce and Ionic components.

Resources for Article:

Further resources on this subject:

Lync 2013 Hybrid and Lync Online [article]
Optimizing JavaScript for iOS Hybrid Apps [article]
Creating Mobile Dashboards [article]


Building VR objects in React VR 2.0: Getting started with polygons in Blender

Sunith Shetty
05 Jun 2018
11 min read
A polygon is an n-sided object composed of vertices (points), edges, and faces. A face can face in or out, or be double-sided. For most real-time VR we use single-sided polygons; we noticed this when we first placed a plane in the world: depending on its orientation, you may not see it. In today's tutorial, we will understand why polygons are the best way to present real-time graphics.

To really show how this all works, I'm going to show the internal format of an OBJ file. Normally, you won't hand edit these; we are beyond the days of VR constructed from a few thousand polygons (my first VR world had a train that represented downloads, and it had six polygons, each point lovingly crafted by hand), so hand editing things isn't necessary. But you may need to edit OBJ files to include the proper paths, or to make changes your modeler may not do natively, so let's dive in!

This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you'll gain a deeper understanding of Virtual Reality and build a full-fledged VR app to add to your profile.

Polygons are constructed by creating points in 3D space and connecting them with faces. You can consider that vertices are connected by lines (most modelers work this way), but in the native WebGL that React VR is based on, it's really just faces. The points don't really exist by themselves, but more or less "anchor" the corners of the polygon. For example, here is a simple triangle, modeled in Blender:

In this case, I have constructed a triangle with three vertices and one face (with just a flat color, in this case green). The edges, shown in yellow or a lighter shade, are there for the convenience of the modeler and won't be explicitly rendered. Here is what the triangle looks like inside our gallery:

If you look closely at the Blender screenshot, you'll notice that the object is not centered in the world. When it exports, it will export with the translations that you have applied in Blender. This is why the triangle is slightly off center on the pedestal. The good news is that we are in outer space, floating in orbit, and therefore do not have to worry about gravity. (React VR does not have a physics engine, although it is straightforward to add one.)

The second thing you may notice is that the yellow lines (lighter gray lines in print) around the triangle in Blender do not persist in the VR world. This is because the file is exported as one face, which connects three vertices.

The plural of vertex is vertices, not vertexes. If someone asks you about vertexes, you can laugh at them almost as much as when someone pronounces Bézier curve as "bez ee er." OK, to be fair, I did that once; now I always say "Beh zee a."

All levity aside, now let's make it look more interesting than a flat green triangle. This is done through something usually called texture mapping. Honestly, the phrases "textures" and "materials" often get swapped around interchangeably, although lately they have settled down: material means anything about an object's physical appearance except its shape; a material could be how shiny it is, how transparent it is, and so on. A texture is usually just the colors of the object (tile is red, skin may have freckles) and is therefore usually called a texture map, which is represented with a JPG, TGA, or other image format. There is no real cross-software file format for materials or shaders (which are usually computer code that represents the material).
When it comes time to render, there are some shader languages that are standard, although these are not always used in CAD programs. You will need to learn what your CAD program uses and become proficient in how it handles materials (and texture maps). This is far beyond the scope of this book.

The OBJ file format (which is what React VR usually uses) allows the use of several different texture maps to properly construct the material. It can also indicate the material itself via parameters coded in the file. First, let's take a look at what the triangle consists of. We imported OBJ files via the Model keyword:

```jsx
<Model
  source={{
    obj: asset('OneTri.obj'),
    mtl: asset('OneTri.mtl'),
  }}
  style={{
    transform: [
      { translate: [ -0, -1, -5. ] },
      { scale: .1 },
    ]
  }}
/>
```

First, let's open the MTL (material) file, as the .obj file uses the .mtl file. The OBJ file format was developed by Wavefront:

```
# Blender MTL File: 'OneTri.blend'
# Material Count: 1

newmtl BaseMat
Ns 96.078431
Ka 1.000000 1.000000 1.000000
Kd 0.040445 0.300599 0.066583
Ks 0.500000 0.500000 0.500000
Ke 0.000000 0.000000 0.000000
Ni 1.000000
d 1.000000
illum 2
```

A lot of this is housekeeping, but the important parameters are the following:

Ka: ambient color, in RGB format
Kd: diffuse color, in RGB format
Ks: specular color, in RGB format
Ns: specular exponent, from 0 to 1,000
d: transparency (d meant "dissolved"). Note that WebGL cannot normally show refractive materials, or display real volumetric materials and ray tracing, so d is simply the percentage of how much light is blocked. 1 (the default) is fully opaque. Note that d in the .obj specification works for illum mode 2.
Tr: alternate representation of transparency; 0 is fully opaque.
illum <#>: a number from 0 to 10. Not all illumination models are supported by WebGL. The current list is:
  0. Color on and Ambient off.
  1. Color on and Ambient on.
  2. Highlight on (and colors) <= this is the normal setting.
There are other illumination modes, but they are currently not used by WebGL. This, of course, could change.
Ni: optical density. This is important for CAD systems, but the chances of it being supported in VR without a lot of tricks are pretty low. Computers and video cards get faster and faster all the time, though, so maybe optical density and real-time ray tracing will be supported in VR eventually, thanks to Moore's law (statistically, computing power roughly doubles every two years or so).

Very important: make sure you include the "lit" keyword with all of your model declarations, otherwise the loader will assume you have only an emissive (glowing) object and will ignore most of the parameters in the material file! YOU HAVE BEEN WARNED. It'll look very weird and you'll be completely confused. Don't ask me why I know!
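To make that warning concrete, here is the same Model declaration with the lit keyword added; everything else is unchanged from the earlier snippet:

```jsx
{/* The OneTri model declared with the "lit" keyword so the
    material parameters in OneTri.mtl are actually honored */}
<Model
  lit
  source={{
    obj: asset('OneTri.obj'),
    mtl: asset('OneTri.mtl'),
  }}
  style={{
    transform: [
      { translate: [ -0, -1, -5. ] },
      { scale: .1 },
    ]
  }}
/>
```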
The OBJ file itself has a description of the geometry. These are not usually something you can hand edit, but it's useful to see the overall structure. For the simple object shown before, it's quite manageable:

```
# Blender v2.79 (sub 0) OBJ File: 'OneTri.blend'
# www.blender.org
mtllib OneTri.mtl
o Triangle
v -7.615456 0.218278 -1.874056
v -4.384528 15.177612 -6.276536
v 4.801097 2.745610 3.762014
vn -0.445200 0.339900 0.828400
usemtl BaseMat
s off
f 3//1 2//1 1//1
```

First, you see a comment (marked with #) that tells you what software made it and the name of the original file; this can vary. The mtllib line is a call-out to a particular material file, which we already looked at. The o line (and a g line, if there is a group) defines the name of the object and group; although React VR doesn't really use these (currently), in most modeling packages they will be listed in the hierarchy of objects.

The v and vn keywords are where it gets interesting, although these are still not something visible. The v keyword creates a vertex in x, y, z space; the vertices built will later be connected into polygons. The vn keyword establishes the normal for those points, and vt would create the texture coordinates of the same points. More on texture coordinates in a bit. The usemtl BaseMat line establishes what material, specified in your .mtl file, will be used for the following faces. The s off line means smoothing is turned off.

Smoothing and vertex normals can make objects look smooth, even if they are made with very few polygons. For example, take a look at these two teapots; the first is without smoothing. It looks pretty computer-graphics-like, right? Now have a look at the same teapot with the "s 1" parameter specified throughout and normals included in the file. This is pretty normal (pun intended); what I mean is that most CAD software will compute normals for you. You can make normals smooth or sharp, and add edges where needed. This adds detail without excess polygons and is fast to render. The smooth teapot looks much more real, right? Well, we haven't seen anything yet! Let's discuss texture.

I didn't use to like sushi because of the texture. We're not talking about that kind of texture, though. Texture mapping is a lot like taking a piece of Christmas wrapping paper and putting it around an odd-shaped object. Just like when you get that weird-looking present at Christmas and don't know quite what to do, sometimes the wrapping doesn't have a clear right way to do it. Boxes are easy, but most interesting objects aren't always a box. I found a picture online with the caption "I hope it's an X-Box."

The "wrapping" is done via U, V coordinates in the CAD system. Let's take a look at a triangle with proper UV coordinates. We then go get our wrapping paper; that is to say, we take an image file we are going to use as the texture. We then wrap that in our CAD program by specifying it as a texture map. We'll then export the triangle and put it in our world.

You would probably have expected to see "left and bottom" on the texture map. Taking a closer look in our modeling package (Blender, still), we see that the default UV mapping (using Blender's standard tools) tries to use as much of the texture map as possible, but from an artistic standpoint it may not be what we want. This is not to show that Blender is "yer doin' it wrong," but to make the point that you've got to check the texture mapping before you export. Also, if you are attempting to import objects without U, V coordinates, double-check them!

If you are hand editing an .mtl file and your textures are not showing up, double-check your .obj file and make sure you have vt lines; if you don't, the texture will not show up. This means the U, V coordinates for the texture mapping were not set.

Texture mapping is non-trivial; there is quite an art to it, and entire books have been written about texturing and lighting. Having said that, you can get pretty far with Blender and any OBJ file if you've downloaded something from the internet and want to make it look a little better. We'll show you how to fix it. The end goal is to get a UV map that is more usable and efficient.
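To make the earlier vt warning concrete, here is a hedged sketch of what the triangle's geometry might look like once UV coordinates are assigned; the vt values below are invented for illustration. Note how the face entries change from the v//vn form to v/vt/vn, so each vertex now references a texture coordinate:

```
# Sketch: the triangle with (hypothetical) texture coordinates added
v -7.615456 0.218278 -1.874056
v -4.384528 15.177612 -6.276536
v 4.801097 2.745610 3.762014
vt 0.000000 0.000000
vt 0.500000 1.000000
vt 1.000000 0.000000
vn -0.445200 0.339900 0.828400
usemtl BaseMat
s off
f 3/3/1 2/2/1 1/1/1
```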
Not all OBJ file exporters export proper texture maps, and .obj files you find online may or may not have UVs set. You can use Blender to fix the unwrapping of your model. We have several good Blender books to give you a head start with it. You can also use your favorite CAD modeling program, such as Max, Maya, Lightwave, Houdini, and so on.

This is important, so I'll mention it again in an info box. If you already use a different polygon modeler or CAD program, you don't have to learn Blender; your program will undoubtedly work fine, and you can skim this section. If you don't want to learn Blender anyway, you can download all of the files that we construct from the GitHub link. You'll need some of the image files if you do work through the examples. Files for this article are at: http://bit.ly/VR_Chap7.

To summarize, we learned the basics of polygon modeling with Blender, the importance of polygon budgets, how to export those models, and details of the OBJ/MTL file formats. To learn more about how to make virtual worlds look real, do check out the book Getting Started with React VR.

Top 7 modern Virtual Reality hardware systems
Types of Augmented Reality targets
Unity plugins for augmented reality application development


How Android app developers can convert iPhone apps

Michael Kordvani
02 May 2018
5 min read
Businesses like to cast their nets as wide as possible in search of new customers. This type of broad outreach requires designing mobile apps for both iOS and Android phones. Although iPhones are very popular in the U.S. market, if you want to step up and attract global customers, you need to expand your product to the Android platform.

Most Android app developers will face this challenge at some point: how to create an Android app from an iPhone app, and make it at least as successful as the primary product. It's not surprising that developers tend to concentrate on building up their skills for one platform in particular. Both platforms have their challenges, and spreading yourself too thin in an effort to meet the requirements of both phones can mean that the user experience suffers. But the challenges can be overcome.

iPhone apps are great, but limited in terms of market size. Android apps are the biggest market players, and companies often ask the same team of Android app developers to take on both projects at once. With a few tips and tricks to help you along, you'll be able to make your project a success.

What are the benefits of redesigning an iPhone app into an Android app?

Before converting your iPhone app into an Android app, it's important to keep in mind that enlarging the customer base is not the only benefit. You will also get the chance to add more features, diversify money-making methods with new options for in-app purchasing and advertisements, and get a full product overhaul at only a fraction of the cost of starting from scratch. These are the obvious reasons why companies usually don't overlook the possibility of iPhone app conversion. When a company has a single team of iPhone and Android app developers and can save on new projects, it often pays off handsomely in the end.

Hiring a product manager to oversee the process is not a bad idea if you have the budget for it. A manager can help the team understand the similar elements of these otherwise different platforms. Despite the UX/UI design differences in terms of navigation, icons, and app architecture, you still need to code with customer requirements in mind. Also, before you start redesigning the product, keep in mind that the business model may need to be tweaked, and that the store submission process is quite different.

UX and UI design differences between Android and iOS

The platforms have significant differences in terms of design. You cannot simply copy the elements from an iPhone to an Android phone environment, at least not in a clear-cut way. You must design with the already-set styles in mind. For example, Android apps use a specific icon library, which is different from the one used for iOS. Android app developers and designers work with a wider color palette, varying in nuances and shades, while iPhone apps are more standardized. Roboto is the preferred Android font, and San Francisco is its iPhone counterpart. The hierarchical typography is not the same either.

Because of the variations in navigation tools, the user interface looks very different on Android phones. iPhone navigation is concentrated at the bottom; Android phones use more side and top navigation bars. Don't forget about the thumb issue, either: iPhones are generally built around an average-sized thumb, while with Android you have a bit more leeway to accommodate all thumb sizes. Even if you focus only on these design basics, the user interface on an iPhone will still look different from the one on an Android smartphone. If we factor in button styles (flat on iOS versus flat/floating on Android), grids and action sheets, as well as dropdown menus, things get even more complex. This guide offers a helpful comparison overview you can use when converting iPhone apps into Android apps.

Sizing and resolution on Android phones also present their own challenges. Designers need to account for many different Android screen resolutions, which is significantly more challenging than designing for the unified iPhone layout. iPhone app developers use points, and Android app developers use pixels, when measuring screen objects such as fonts and icons; the pt/px ratio is 0.75.
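As a quick illustration of that ratio, here is a hedged Kotlin sketch of a conversion helper. It simply applies the 0.75 pt/px figure quoted above, so treat the function and its output as illustrative arithmetic rather than a substitute for proper per-density layout work:

```kotlin
// Sketch: converting an iOS point measurement to its Android pixel
// equivalent using the 0.75 pt/px ratio quoted above (pt = px * 0.75)
fun iosPointsToAndroidPixels(points: Float): Float = points / 0.75f

fun main() {
    val iosFontSizePt = 17f // a typical iOS body-font size, chosen for illustration
    println("$iosFontSizePt pt on iOS is roughly ${iosPointsToAndroidPixels(iosFontSizePt)} px")
}
```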
At the same time, clients need some degree of standardization for brand recognition. They don't want to confuse users with two apps that don't appear to be from the same company.

Further considerations Android app developers need to make about code and external libraries

It can be challenging to find a team of Android app developers who also know how to code in iOS-friendly languages. However, it may be more efficient and cost-effective than working with two different teams. Programming languages that work for both Android apps and iOS apps are Kotlin and the C languages. Nonetheless, both platforms have widely preferred languages: Swift for iPhone apps and Java for Android apps. Android app developers should also check for compatibility before using external libraries and tools in the conversion project.

Although challenging, converting iPhone apps to the Android platform is far from impossible. After all, people do it every day, as dual-platform apps are the rule rather than the exception. All you need to do to make a great product is understand the key differences and make the necessary adjustments.

Build your first Android app with Kotlin
How to Secure and Deploy an Android App
Why are Android developers switching from Java to Kotlin?


Bitbucket to no longer support Mercurial, users must migrate to Git by May 2020

Fatema Patrawala
21 Aug 2019
6 min read
Yesterday marked the end of an era for Mercurial users, as Bitbucket announced it will no longer support Mercurial repositories after May 2020.

Bitbucket, owned by Atlassian, is a web-based version control repository hosting service for source code and development projects. It has supported Mercurial since its launch in 2008, and Git since October 2011. Now, almost ten years into that shared journey, the Bitbucket team has decided to remove Mercurial support from Bitbucket Cloud and its API. The official announcement reads, "Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020."

The Bitbucket team also communicated the timeline for sunsetting the Mercurial functionality. After February 1, 2020, users will no longer be able to create new Mercurial repositories. After June 1, 2020, users will not be able to use Mercurial features in Bitbucket or via its API, and all Mercurial repositories will be removed. All current Mercurial functionality in Bitbucket will remain available through May 31, 2020.

The team said the decision was not an easy one for them, and that Mercurial held a special place in their heart. But according to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system, with only about 3% developer adoption. On top of this, Mercurial usage on Bitbucket saw a steady decline, and the percentage of new Bitbucket users choosing Mercurial fell to less than 1%. Hence the decision to remove the Mercurial repos.

How can users migrate and export their Mercurial repos

The Bitbucket team recommends that users migrate their existing Mercurial repos to Git. They have also extended support for migration and kept the available options open for discussion in a dedicated Community thread, where users can discuss conversion tools and migration, share tips, and offer troubleshooting help.

If users prefer to continue using Mercurial, there are a number of free and paid Mercurial hosting services available to them. The Bitbucket team has also created a Git tutorial that covers everything from the basics of creating pull requests to rebasing and Git hooks.
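The announcement leaves the actual conversion to users. As a hedged sketch (not Bitbucket's official procedure), one commonly used community tool is fast-export, which replays a Mercurial repository's history into a fresh Git repository; the repository paths and remote URL below are placeholders:

```bash
# Sketch: converting a local Mercurial repo to Git with hg-fast-export
# (assumes Python and Mercurial are installed; paths are illustrative)
git clone https://github.com/frej/fast-export.git
git init converted-repo && cd converted-repo
../fast-export/hg-fast-export.sh -r /path/to/your/hg-repo
git checkout HEAD                 # materialize the working tree
# then push the converted history to a newly created Git remote
git remote add origin git@bitbucket.org:youruser/converted-repo.git
git push -u origin --all
```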
Community shows anger and sadness over the decision to discontinue Mercurial support

There is outrage among Mercurial users, who are extremely unhappy and saddened by this decision from Bitbucket. They have expressed their anger not just on one platform but across multiple forums and community discussions. Users feel that Bitbucket's decision to stop offering Mercurial support is bad, but the decision to also delete the repos is worse.

On Hacker News, users speculated that this decision was influenced by market potential rather than by technically superior architecture and ease of use. They feel GitHub has successfully marketed Git, and that is how the two have become synonymous in the developer community. One of them comments:

"It's very sad to see bitbucket dropping mercurial support. Now only Facebook and volunteers are keeping mercurial alive. Sometimes technically better architecture and user interface lose to a non user friendly hard solutions due to inertia of mass adoption. So a lesson in Software development is similar to betamax and VHS, so marketing is still a winner over technically superior architecture and ease of use. GitHub successfully marketed git, so git and GitHub are synonymous for most developers. Now majority of open source projects are reliant on a single proprietary solution Github by Microsoft, for managing code and project. Can understand the difficulty of bitbucket, when Python language itself moved out of mercurial due to the same inertia. Hopefully gitlab can come out with mercurial support to migrate projects using it from bitbucket."

Another user comments that Mercurial support was the only reason for them to use Bitbucket, since GitHub is otherwise miles ahead, and that once Mercurial support ends, Bitbucket will end soon too. The comment reads:

"Mercurial support was the one reason for me to still use Bitbucket: there is no other Bitbucket feature I can think of that Github doesn't already have, while Github's community is miles ahead since everyone and their dog is already there. More importantly, Bitbucket leaves the migration to you (if I read the article correctly). Once I download my repo and convert it to git, why would I stay with the company that just made me go through an annoying (and often painful) process, when I can migrate to Github with the exact same command? And why isn't there a "migrate this repo to git" button right there? I want to believe that Bitbucket has smart people and that this choice is a good one. But I'm with you there - to me, this definitely looks like Bitbucket will die."

On Reddit, programming folks see this as a big change from Bitbucket, as it is the major Mercurial hosting provider, and they feel Bitbucket announced this on pretty short notice, leaving too little time for migration. Beyond the developer forums, users have expressed their displeasure on the Atlassian community blog as well. A team of scientists commented:

"Let's get this straight : Bitbucket (offering hosting support for Mercurial projects) was acquired by Atlassian in September 2010. Nine years later Atlassian decides to drop Mercurial support and delete all Mercurial repositories. Atlassian, I hate you :-) The image you have for me is that of a harmful predator. We are a team of scientists working in a university. We don't have computer scientists, we managed to use a version control simple as Mercurial, and it was a hard work to make all scientists in our team to use a version control system (even as simple as Mercurial). We don't have the time nor the energy to switch to another version control system. But we will, forced and obliged. I really don't want to check out Github or something else to migrate our projects there, but we will, forced and obliged."

Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with 'compromised' valid credentials leaving behind a ransom note
BitBucket goes down for over an hour


Build your first Android app with Kotlin

Aarthi Kumaraswamy
13 Apr 2018
10 min read
Android application development is an area where Kotlin shines. Before getting started on this journey, we must set up our systems for the task at hand. A major necessity for developing Android applications is a suitable IDE; it is not a requirement, but it makes the development process easier. Many IDE choices exist for Android developers. The most popular are:

Android Studio
Eclipse
IntelliJ IDE

Android Studio is by far the most powerful of the available IDEs with respect to Android development. As a consequence, we will be utilizing this IDE in all Android-related chapters in this book.

Setting up Android Studio

At the time of writing, the version of Android Studio that comes bundled with full Kotlin support is Android Studio 3.0. The canary version of this software can be downloaded from its website. Once downloaded, open the downloaded package or executable and follow the installation instructions. A setup wizard guides you through the IDE setup procedure.

Continuing to the next setup screen will prompt you to choose which type of Android Studio setup you'd like. Select the Standard setup and continue to the next screen. Click Finish on the Verify Settings screen. Android Studio will now download the components required for your setup; you will need to wait a few minutes for them to download. Click Finish once the component download has completed. You will be taken to the Android Studio landing screen, and you are now ready to use Android Studio.

Note: You may also want to read Benefits of using Kotlin Java for Android programming.

Building your first Android application with Kotlin

Without further ado, let's explore how to create a simple Android application with Android Studio. We will be building the HelloApp, an app that displays Hello world! on the screen upon the click of a button.

On the Android Studio landing screen, click Start a new Android Studio project. You will be taken to a screen where you specify details about the app you are about to build, such as the name of the application, your company domain, and the location of the project. Type in HelloApp as the application name and enter a company domain. If you do not have a company domain name, fill in any valid domain name in the company domain input box; as this is a trivial project, a legitimate domain name is not required. Specify the location in which you want to save this project and tick the checkbox for the inclusion of Kotlin support. After filling in the required parameters, continue to the next screen.

Here, we are required to specify our target devices. We are building this application to run on smartphones specifically, hence tick the Phone and Tablet checkbox if it's not already ticked. You will notice an options menu next to each device option. This dropdown is used to specify the target API level for the project being created. An API level is an integer that uniquely identifies the framework API revision offered by a version of the Android platform. Select API level 15 if not already selected and continue to the next screen.

On the next screen, we are required to select an activity to add to our application. An activity is a single screen with a unique user interface, similar to a window. We will discuss activities in more depth in Chapter 2, Building an Android Application – Tetris. For now, select the empty activity and continue to the next screen.
Now, we need to configure the activity that we just specified should be created. Name the activity HelloActivity and ensure the Generate Layout File and Backwards Compatibility checkboxes are ticked. Now, click the Finish button. Android Studio may take a few minutes to set up your project. Once the setup is complete, you will be greeted by the IDE window containing your project files.

Note: Errors pertaining to the absence of required project components may be encountered at any point during project development. Missing components can be downloaded from the SDK manager.

Make sure that the project window of the IDE is open (on the navigation bar, select View | Tool Windows | Project) and that the Android view is currently selected from the drop-down list at the top of the Project window. You will see the following files at the left-hand side of the window:

app | java | com.mydomain.helloapp | HelloActivity.java: This is the main activity of your application. An instance of this activity is launched by the system when you build and run your application.

app | res | layout | activity_hello.xml: The user interface for HelloActivity is defined within this XML file. It contains a TextView element placed within the ViewGroup of a ConstraintLayout. The text of the TextView has been set to Hello World!

app | manifests | AndroidManifest.xml: The AndroidManifest file is used to describe the fundamental characteristics of your application. In addition, this is the file in which your application's components are defined.

Gradle Scripts | build.gradle: Two build.gradle files will be present in your project. The first build.gradle file is for the project and the second is for the app module. You will most frequently work with the module's build.gradle file to configure how the Gradle tools compile and build your app.

Note: Gradle is an open source build automation system used for the declaration of project configurations. In Android, Gradle is utilized as a build tool with the goal of building packages and managing application dependencies.
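For orientation, here is a hedged sketch of what the Kotlin-relevant parts of the module-level build.gradle typically look like after a project is generated with Kotlin support ticked; the plugin and dependency versions are illustrative, and your generated file will use whatever versions your Android Studio release chose:

```groovy
// Sketch: module-level build.gradle (app) for a Kotlin-enabled project.
// Version numbers are illustrative; the kotlin_version variable is
// normally declared in the project-level build.gradle.
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'

android {
    compileSdkVersion 26
    defaultConfig {
        applicationId "com.mydomain.helloapp"
        minSdkVersion 15      // the API level chosen during project setup
        targetSdkVersion 26
        versionCode 1
        versionName "1.0"
    }
}

dependencies {
    // Kotlin standard library, added by ticking "Include Kotlin support"
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version"
    implementation 'com.android.support:appcompat-v7:26.1.0'
}
```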
Creating a user interface

A user interface (UI) is the primary means by which a user interacts with an application. The user interfaces of Android applications are made by creating and manipulating layout files, which are XML files that live in app | res | layout. To create the layout for the HelloApp, we are going to do three things:

Add a LinearLayout to our layout file
Place the TextView within the LinearLayout and remove the android:text attribute it possesses
Add a button to the LinearLayout

Open the activity_hello.xml file if it's not already open. You will be presented with the layout editor. If the editor is in the Design view, change it to its Text view by toggling the option at the bottom of the layout editor. Now, your layout editor should look similar to that of the following screenshot.

A LinearLayout is a ViewGroup that arranges child views either horizontally or vertically within a single column. Copy the code snippet of our required LinearLayout from the following block and paste it within the ConstraintLayout, preceding the TextView:

```xml
<LinearLayout
    android:id="@+id/ll_component_container"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:gravity="center">
</LinearLayout>
```

Now, copy and paste the TextView present in the activity_hello.xml file into the body of the LinearLayout element and remove the android:text attribute:

```xml
<LinearLayout
    android:id="@+id/ll_component_container"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:gravity="center">

    <TextView
        android:id="@+id/tv_greeting"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textSize="50sp" />
</LinearLayout>
```

Lastly, we need to add a button element to our layout file. This element will be a child of our LinearLayout. To create a button, we use the Button element:

```xml
<LinearLayout
    android:id="@+id/ll_component_container"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:gravity="center">

    <TextView
        android:id="@+id/tv_greeting"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textSize="50sp" />

    <Button
        android:id="@+id/btn_click_me"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="16dp"
        android:text="Click me!" />
</LinearLayout>
```

Toggle to the layout editor's design view to see how the changes we have made so far render on the user interface.

Now we have our layout, but there's a problem: our CLICK ME! button does not actually do anything when clicked. We are going to fix that by adding a listener for click events to the button. Locate and open the HelloActivity file and edit it to add the logic for the CLICK ME! button's click event, as well as the required package imports, as shown in the following code:

```kotlin
package com.mydomain.helloapp

import android.support.v7.app.AppCompatActivity
import android.os.Bundle
import android.text.TextUtils
import android.widget.Button
import android.widget.TextView
import android.widget.Toast

class HelloActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_hello)

        val tvGreeting = findViewById<TextView>(R.id.tv_greeting)
        val btnClickMe = findViewById<Button>(R.id.btn_click_me)

        btnClickMe.setOnClickListener {
            if (TextUtils.isEmpty(tvGreeting.text)) {
                tvGreeting.text = "Hello World!"
            } else {
                Toast.makeText(this, "I have been clicked!",
                        Toast.LENGTH_LONG).show()
            }
        }
    }
}
```

In the preceding code snippet, we have added references to the TextView and Button elements present in our activity_hello layout file by utilizing the findViewById function. The findViewById function can be used to get references to layout elements that are within the currently set content view. The second line of the onCreate function sets the content view of HelloActivity to the activity_hello.xml layout.

Next to the findViewById function identifier, we have the TextView type written between two angle brackets. This is called a function generic; it is being used to enforce that the resource ID passed to findViewById belongs to a TextView element. After adding our reference objects, we set an onClickListener on btnClickMe.
Listeners are used to listen for the occurrence of events within an application. In order to perform an action upon the click of an element, we pass a lambda containing the action to be performed to the element's setOnClickListener method. When btnClickMe is clicked, tvGreeting is checked to see whether it contains any text. If no text has been set on the TextView, its text is set to Hello World!; otherwise, a toast is displayed with the text I have been clicked!

Running the Android application

In order to run the application, click the Run 'app' (^R) button at the top-right side of the IDE window and select a deployment target. The HelloApp will be built, installed, and launched on the deployment target. You may use one of the available prepackaged virtual devices, or create a custom virtual device to use as the deployment target. You may also decide to connect a physical Android device to your computer via USB and select it as your target. The choice is up to you.

After selecting a deployment device, click OK to build and run the application. Upon launching the application, our created layout is rendered. When CLICK ME! is clicked, Hello World! is shown to the user. Subsequent clicks of the CLICK ME! button display a toast message with the text I have been clicked!

You enjoyed an excerpt from the book Kotlin Programming By Example by Iyanu Adelekan. Start building and deploying Android apps with Kotlin using this book.

Check out other related posts:

Creating a custom layout implementation for your Android app
Top 5 Must-have Android Applications
OpenCV and Android: Making Your Apps See

Getting Started with Kinect

Packt
30 Aug 2013
8 min read
(For more resources related to this topic, see here.)

Before the birth of Microsoft Kinect, few people were familiar with the technology of motion sensing. Similar devices were invented and developed originally for monitoring aerial and undersea aggressors in wars. Outside military use, motion sensors are widely used in alarm systems, lighting systems, and so on, where they detect whether someone or something disrupts the waves throughout a room and trigger predefined events. Although radar sensors and modern infrared motion sensors are used quite commonly in our lives, we seldom notice their existence, and can hardly make use of these devices in our own applications. But Kinect changed everything from the time it was launched in North America at the end of 2010.

Different from most other user input controllers, Kinect enables users to interact with programs without actually touching a mouse or a pad, but only through gestures. In a top-level view, a Kinect sensor is made up of an RGB camera, a depth sensor, an IR emitter, and a microphone array, which consists of several microphones for sound and voice recognition. A standard Kinect (for Windows) equipment is shown as follows:

The Kinect device

The Kinect drivers and software, whether from Microsoft or from third-party companies, can even track and analyze advanced gestures and the skeletons of multiple players. All these features make it possible to design brilliant and exciting applications with hands-free user inputs. Kinect has already brought a lot of games and software to an entirely new level. It is believed to be the bridge between the physical world we exist in and the virtual reality we create, a completely new way of interacting with arts, and a profitable business opportunity for individuals and companies.

In this article, we will try to make an interesting game with the popular Kinect technology for user inputs. As Kinect captures the camera and depth images as video streams, we can also merge this view of our real-world environment with virtual elements, which is called Augmented Reality (AR). This enables users to feel as if they appear and live in a nonexistent world, or as if something unbelievable exists on the physical earth.

In this article, we will first introduce the installation of the Kinect hardware and software on personal computers, and then consider a good enough idea compounded of Kinect and augmented reality elements.

Before installing the Kinect device on your PC, you should obviously buy the Kinect equipment first. In this article, we will depend on Kinect for Windows or Kinect for Xbox 360, which can be learned about and bought at:

http://www.microsoft.com/en-us/kinectforwindows/
http://www.xbox.com/en-US/kinect

Please note that you don't need to buy an Xbox 360 at all. Kinect will be connected to a PC so that we can make custom programs for it. An alternative choice is Kinect for Windows, which is located at:

http://www.microsoft.com/en-us/kinectforwindows/purchase/

The uses and development of both will be no different for our cases.

Installation of Kinect

It is strongly suggested that you have a Windows 7 operating system or higher. It can be either 32-bit or 64-bit and with dual-core or faster processors. Linux developers can also benefit from third-party drivers and SDKs to manipulate Kinect components.
Before we start to discuss the software installation, you can download both the Microsoft Kinect SDK and the Developer Toolkit from:

http://www.microsoft.com/en-us/kinectforwindows/develop/developerdownloads.aspx

In this article, we prefer to develop Kinect-based applications using Kinect SDK Version 1.5 (or higher) and the C++ language. Later versions should be backward compatible, so the source code provided in this article doesn't need to be changed.

Setting up your Kinect software on PCs

After we have downloaded the SDK and the Developer Toolkit, it's time for us to install them on the PC and ensure that they can work with the Kinect hardware. Let's perform the following steps:

1. Run the setup executable with administrator permissions.
2. Select I agree to the license terms and conditions after reading the License Agreement.

The Kinect SDK setup dialog

3. Follow the steps until the SDK installation has finished. Then, install the toolkit following similar instructions.

The hardware installation is easy: plug the ends of the cable into the USB port and a power point, and plug the USB into your PC. Wait for the drivers to be found automatically.

Now, start the Developer Toolkit Browser, choose Samples: C++ from the tabs, and find and run the sample with the name Skeletal Viewer. You should be able to see a new window demonstrating the depth/skeleton/color images of the current physical scene, similar to the following image:

The depth (left), skeleton (middle), and color (right) images read from Kinect

Why did I do that?

We chose to set up the SDK software first so that it installs the motor and camera drivers, the APIs, and the documentation, as well as the toolkit including resources and samples, onto the PC. If the operation steps are inverted, that is, if the hardware is connected before installing the SDK, your Windows OS may not be able to recognize the device. Just start the SDK setup at that point and the device should be identified again during the installation process.

But before actually using Kinect, you still have to ensure there is nothing between the device and you (the player). It's best to keep the play space at least 1.8 m wide and about 1.8 m to 3.6 m long from the sensor. If you have more than one Kinect device, don't keep them face-to-face, as there may be infrared interference between them.

If you have multiple Kinects to install on the same PC, please note that one USB root hub can have one and only one Kinect connected. The problem happens because Kinect takes over 50 percent of the USB bandwidth, and it needs an individual USB controller to run. So, plugging more than one device into the same USB hub means only one of them will work.

The depth image at the left in the preceding image shows a human (in fact, the author) standing in front of the camera. Some parts may be totally black if they are too near (often less than 80 cm) or too far (often more than 4 m). If you are using Kinect for Windows, you can turn on Near Mode to show objects that are near the camera; however, Kinect for Xbox 360 doesn't have such a feature.

You can read more about the software and hardware setup at:

http://www.microsoft.com/en-us/kinectforwindows/purchase/sensor_setup.aspx
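If you would rather sanity-check the installation from code than through the toolkit browser, the following is a minimal C++ sketch against the NUI API of Kinect SDK 1.5. It is not the book's project code, and the chosen flags and resolution are just one reasonable configuration; remember to link against Kinect10.lib:

#include <windows.h>
#include <NuiApi.h>

int main()
{
    // Initialize the sensor for the color and depth streams
    HRESULT hr = NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR |
                               NUI_INITIALIZE_FLAG_USES_DEPTH);
    if (FAILED(hr))
        return 1;  // no sensor found, or the drivers are missing

    // Open the depth stream at 640x480, buffering up to 2 frames
    HANDLE depthEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    HANDLE depthStream = NULL;
    hr = NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH,
                            NUI_IMAGE_RESOLUTION_640x480,
                            0, 2, depthEvent, &depthStream);

    // Frames would be read here with NuiImageStreamGetNextFrame()

    NuiShutdown();
    return FAILED(hr) ? 1 : 0;
}

If both calls succeed, the SDK, the drivers, and the sensor are all talking to each other, which is exactly what the Skeletal Viewer sample verifies in a more visual way.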
The idea of the AR-based Fruit Ninja game

Now it's time for us to define the goal we are going to achieve in this article. As a quick but practical guide to Kinect and augmented reality, we should be able to make use of the depth detection, video streaming, and motion tracking functionalities in our project. 3D graphics APIs are also important here, because virtual elements should be included and interacted with through irregular user inputs (not common mouse or keyboard inputs).

A fine example is the Fruit Ninja game, which is already very popular all over the world. Especially on mobile devices such as smartphones and pads, you can see people destroy different kinds of fruits by touching and swiping their fingers on the screen. With the help of Kinect, our arms can act as blades to cut off flying fruits, and our images can also be shown along with the virtual environment so that we can determine the posture of our bodies and the position of our arms through the screen display.

Unfortunately, this idea is not fresh enough by now. There are already commercial products with similar purposes available in the market; for example:

http://marketplace.xbox.com/en-US/Product/Fruit-Ninja-Kinect/66acd000-77fe-1000-9115-d80258410b79

But please note that we are not going to design a completely different product here, or even bring it to the market after finishing this article. We will only learn how to develop Kinect-based applications, work in our own way from the very beginning, and benefit from the experience in our professional work or as amateurs. So it is okay to reinvent the wheel this time, and have fun in the process and the results.

Summary

Kinect, which is a portmanteau of the words "kinetic" and "connect", is a motion sensor developed and released by Microsoft. It provides a natural user interface (NUI) for tracking and manipulating hands-free user inputs such as gestures and skeleton motions. It can be considered one of the most successful consumer electronics devices of recent years, and we will be using this novel device to build the Fruit Ninja game in this article. We will focus on developing Kinect and AR-based applications on Windows 7 or higher using the Microsoft Kinect SDK 1.5 (or higher) and the C++ programming language. Mainly, we have introduced how to install the Kinect for Windows SDK in this article.

Resources for Article:

Further resources on this subject:
So, what is KineticJS? [Article]
Mission Running in EVE Online [Article]
Making Money with Your Game [Article]

Flexible Layouts with Swift and UIStackView

Milton Moura
04 Jan 2016
12 min read
In this post we will build a Sign In and Password Recovery form with a single flexible layout, using Swift and the UIStackView class, which has been available since the release of the iOS 9 SDK. By taking advantage of UIStackView's properties, we will dynamically adapt to the device's orientation and show / hide different form components with animations. The source code for this post can be found in this github repository.

Auto Layout

Auto Layout has become a requirement for any application that wants to adhere to modern best practices of iOS development. When introduced in iOS 6, it was optional and full visual support in Interface Builder just wasn't there. With the release of iOS 8 and the introduction of Size Classes, the tools and the API improved, but you could still dodge and avoid Auto Layout. But now, we are at a point where, in order to fully support all device sizes and split-screen multitasking on the iPad, you must embrace it and design your applications with a flexible UI in mind.

The problem with Auto Layout

Auto Layout basically works as a linear equation solver, taking all of the constraints defined in your views and subviews, and calculating the correct sizes and positioning for them. One disadvantage of this approach is that you are obligated to define, typically, between 2 and 6 constraints for each control you add to your view. With different constraint sets for different size classes, the total number of constraints increases considerably, and the complexity of managing them increases as well.

Enter the Stack View

In order to reduce this complexity, the iOS 9 SDK introduced UIStackView, an interface control that serves the single purpose of laying out collections of views. A UIStackView will dynamically adapt its contained views' layout to the device's current orientation, screen size, and other changes in its views. You should keep the following stack view properties in mind (a short sketch follows the list):

- The views contained in a stack view can be arranged either vertically or horizontally, in the order they were added to the arrangedSubviews array.
- You can embed stack views within each other, recursively.
- The contained views are laid out according to the stack view's distribution and alignment types. These attributes specify how the view collection is laid out across the span of the stack view (distribution) and how to align all subviews within the stack view's container (alignment).
- Most properties are animatable, and inserting / deleting / hiding / showing views within an animation block will also be animated.
- Even though you can use a stack view within a UIScrollView, don't try to replicate the behaviour of a UITableView or UICollectionView, as you'll soon regret it.

Apple recommends that you use UIStackView for all cases, as it will seriously reduce constraint overhead. Just be sure to judiciously use compression and content hugging priorities to solve possible layout ambiguities.
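To make those properties concrete before we apply them, here is a minimal, generic sketch (not yet the form we are about to build) of a stack view being configured, written in the same pre-Swift 3 syntax used in the rest of this post:

let stackView = UIStackView()
stackView.axis = .Vertical              // arrange children top-to-bottom
stackView.alignment = .Fill             // stretch each child across the axis
stackView.distribution = .FillEqually   // give every child the same length
stackView.spacing = 8

// Children are laid out in the order they are added
stackView.addArrangedSubview(UILabel())
stackView.addArrangedSubview(UIButton(type: .System))

Changing any of these properties later re-runs the layout, which is exactly what we will exploit when animating the form below.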
A Flexible Sign In / Recover Form

The sample application we'll build features a simple Sign In form, with the option of recovering a forgotten password, all in a single screen. When tapping on the "Forgot your password?" button, the form will change, hiding the password text field and showing the new call-to-action buttons and message labels. By canceling the password recovery action, these new controls will be hidden once again and the form will return to its initial state.

1. Creating the form

This is what the form will look like when we're done.

Let's start by creating a new iOS > Single View Application template. Then, we add a new UIStackView to the ViewController and add some constraints for positioning it within its parent view. Since we want a full-screen-width vertical form, we set its axis to .Vertical, the alignment to .Fill, and the distribution to .FillProportionally, so that individual views within the stack view can grow bigger or smaller according to their content.

class ViewController: UIViewController
{
    let formStackView = UIStackView()
    ...

    override func viewDidLoad() {
        super.viewDidLoad()

        // Initialize the top-level form stack view
        formStackView.axis = .Vertical
        formStackView.alignment = .Fill
        formStackView.distribution = .FillProportionally
        formStackView.spacing = 8
        formStackView.translatesAutoresizingMaskIntoConstraints = false

        view.addSubview(formStackView)

        // Anchor it to the parent view
        view.addConstraints(
            NSLayoutConstraint.constraintsWithVisualFormat("H:|-20-[formStackView]-20-|", options: [.AlignAllRight, .AlignAllLeft], metrics: nil, views: ["formStackView": formStackView])
        )
        view.addConstraints(
            NSLayoutConstraint.constraintsWithVisualFormat("V:|-20-[formStackView]-8-|", options: [.AlignAllTop, .AlignAllBottom], metrics: nil, views: ["formStackView": formStackView])
        )
        ...
    }
    ...
}

Next, we'll add all the fields and buttons that make up our form. We'll only present a couple of them here, as the rest of the code is boilerplate. In order to keep UIStackView from growing the height of our inputs and buttons as needed to fill vertical space, we add height constraints to set the maximum value for their vertical size.

class ViewController: UIViewController
{
    ...
    var passwordField: UITextField!
    var signInButton: UIButton!
    var signInLabel: UILabel!
    var forgotButton: UIButton!
    var backToSignIn: UIButton!
    var recoverLabel: UILabel!
    var recoverButton: UIButton!
    ...

    override func viewDidLoad() {
        ...
        // Add the email field
        let emailField = UITextField()
        emailField.translatesAutoresizingMaskIntoConstraints = false
        emailField.borderStyle = .RoundedRect
        emailField.placeholder = "Email Address"
        formStackView.addArrangedSubview(emailField)

        // Make sure we have a height constraint, so it doesn't change according to the stack view auto-layout
        emailField.addConstraints(
            NSLayoutConstraint.constraintsWithVisualFormat("V:[emailField(<=30)]", options: [.AlignAllTop, .AlignAllBottom], metrics: nil, views: ["emailField": emailField])
        )

        // Add the password field
        passwordField = UITextField()
        passwordField.translatesAutoresizingMaskIntoConstraints = false
        passwordField.borderStyle = .RoundedRect
        passwordField.placeholder = "Password"
        formStackView.addArrangedSubview(passwordField)

        // Make sure we have a height constraint, so it doesn't change according to the stack view auto-layout
        passwordField.addConstraints(
            NSLayoutConstraint.constraintsWithVisualFormat("V:[passwordField(<=30)]", options: .AlignAllCenterY, metrics: nil, views: ["passwordField": passwordField])
        )
        ...
    }
    ...
}

2. Animating by showing / hiding specific views

By taking advantage of the previously mentioned properties of UIStackView, we can transition from the Sign In form to the Password Recovery form by showing and hiding specific fields and buttons. We do this by setting the hidden property within a UIView.animateWithDuration block.

class ViewController: UIViewController
{
    ...
    // Callback target for the Forgot my password button, animates old and new controls in / out
    func forgotTapped(sender: AnyObject) {
        UIView.animateWithDuration(0.2) { [weak self] () -> Void in
            self?.signInButton.hidden = true
            self?.signInLabel.hidden = true
            self?.forgotButton.hidden = true
            self?.passwordField.hidden = true
            self?.recoverButton.hidden = false
            self?.recoverLabel.hidden = false
            self?.backToSignIn.hidden = false
        }
    }

    // Callback target for the Back to Sign In button, animates old and new controls in / out
    func backToSignInTapped(sender: AnyObject) {
        UIView.animateWithDuration(0.2) { [weak self] () -> Void in
            self?.signInButton.hidden = false
            self?.signInLabel.hidden = false
            self?.forgotButton.hidden = false
            self?.passwordField.hidden = false
            self?.recoverButton.hidden = true
            self?.recoverLabel.hidden = true
            self?.backToSignIn.hidden = true
        }
    }
    ...
}
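It is worth spelling out why we toggle hidden rather than fading views out: when an arranged subview's hidden flag changes inside an animation block, the stack view animates a re-layout and the remaining views close ranks, whereas a transparent view would still occupy its slot. A tiny illustrative sketch of the difference, reusing the fields above:

UIView.animateWithDuration(0.2) { [weak self] () -> Void in
    // The stack view reclaims this view's space and re-flows the rest
    self?.passwordField.hidden = true

    // By contrast, fading a view out would leave an empty gap behind:
    // self?.signInLabel.alpha = 0.0
}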
3. Handling different Size Classes

Because we have many vertical input fields and buttons, space can become an issue when presenting in a compact vertical size, like the iPhone in landscape. To overcome this, we add a stack view to the header section of the form and change its axis orientation between Vertical and Horizontal according to the currently active size class.

override func viewDidLoad() {
    ...
    // Initialize the header stack view, which will change its axis according to the current size class
    headerStackView.axis = .Vertical
    headerStackView.alignment = .Fill
    headerStackView.distribution = .Fill
    headerStackView.spacing = 8
    headerStackView.translatesAutoresizingMaskIntoConstraints = false
    ...
}

// If we are presenting in a Compact Vertical Size Class, let's change the header stack view axis orientation
override func willTransitionToTraitCollection(newCollection: UITraitCollection, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {
    if newCollection.verticalSizeClass == .Compact {
        headerStackView.axis = .Horizontal
    } else {
        headerStackView.axis = .Vertical
    }
}

4. The flexible form layout

So, with a couple of UIStackViews, we've built a flexible form by defining only a few height constraints for our input fields and buttons, with all the remaining constraints magically managed by the stack views. Here is the end result:

Conclusion

We have included in the sample source code a view controller with this same example, but designed with Interface Builder. There, you can clearly see that we have fewer than 10 constraints, on a layout that could easily have 40-50 constraints had we not used UIStackView. Stack views are here to stay, and you should use them now if you are targeting iOS 9 and above.

About the author

Milton Moura (@mgcm) is a freelance iOS developer based in Portugal. He has worked professionally in several industries, from aviation to telecommunications and energy, and is now fully dedicated to creating amazing applications using Apple technologies. With a passion for design and user interaction, he is also very interested in new approaches to software development. You can find out more at http://defaultbreak.com

Integrating with Messages App

Packt
06 Apr 2017
17 min read
In this article by Hossam Ghareeb, the author of the book iOS Programming Cookbook, we will cover the recipe on integrating an app with the Messages app. (For more resources related to this topic, see here.)

Integrating an app with the Messages app

iMessage apps let users use your apps seamlessly from iMessage without having to leave the conversation. Your app can share content in the conversation, make a payment, or do any specific job that seems important or appropriate to do within the Messages app.

Getting ready

Similar to the Stickers app we created earlier, you need Xcode 8.0 or a later version to create an iMessage app extension, and you can test it easily in the iOS simulator. The app that we are going to build is a Google Drive picker app. It will be used from an iMessage extension to send a file to your friends straight from Google Drive. Before starting, ensure that you follow the instructions in the Google Drive API for iOS from https://developers.google.com/drive/ios/quickstart to get a client key to be used in our app. Installing the SDK in Xcode will be done via CocoaPods. To get more information about CocoaPods and how to use it to manage dependencies, visit https://cocoapods.org/.

How to do it…

1. Open Xcode and create a new iMessage app, as shown, and name it Files Picker:
2. Now, let's install the Google Drive SDK for iOS using CocoaPods. Open a terminal and navigate to the directory that contains your Xcode project by running this command:

cd path_to_directory

3. Run the following command to create a Podfile in which to write your dependencies:

pod init

It will create a Podfile for you. Open it via TextEdit and edit it to be like this:

use_frameworks!

target 'PDFPicker' do
end

target 'MessagesExtension' do
    pod 'GoogleAPIClient/Drive', '~> 1.0.2'
    pod 'GTMOAuth2', '~> 1.1.0'
end

4. Then, close the Xcode app completely and run the pod install command to install the SDK for you. A new workspace will be created; open it instead of the Xcode project itself.
5. Prepare the client key from the Google Drive app you created, as mentioned in the Getting ready section, because we are going to use it in the Xcode project.
6. Open MessagesViewController.swift and add the following import statements:

import GoogleAPIClient
import GTMOAuth2

7. Add the following private variables just below the class declaration and embed your client key in the kClientID constant, as shown:

private let kKeychainItemName = "Drive API"
private let kClientID = "Client_Key_Goes_HERE"
private let scopes = [kGTLAuthScopeDrive]
private let service = GTLServiceDrive()

8. Add the following code to your class to request authentication to Google Drive if it's not yet authenticated, and to load the file info:

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view.
    if let auth = GTMOAuth2ViewControllerTouch.authForGoogleFromKeychain(forName: kKeychainItemName, clientID: kClientID, clientSecret: nil) {
        service.authorizer = auth
    }
}

// When the view appears, ensure that the Drive API service is authorized
// and perform API calls
override func viewDidAppear(_ animated: Bool) {
    if let authorizer = service.authorizer, canAuth = authorizer.canAuthorize where canAuth {
        fetchFiles()
    } else {
        present(createAuthController(), animated: true, completion: nil)
    }
}

// Construct a query to get names and IDs of 10 files using the Google Drive API
func fetchFiles() {
    print("Getting files...")
    if let query = GTLQueryDrive.queryForFilesList() {
        query.fields = "nextPageToken, files(id, name, webViewLink, webContentLink, fileExtension)"
        service.executeQuery(query, delegate: self, didFinish: #selector(MessagesViewController.displayResultWithTicket(ticket:finishedWithObject:error:)))
    }
}

// Parse results and display
func displayResultWithTicket(ticket: GTLServiceTicket, finishedWithObject response: GTLDriveFileList, error: NSError?) {
    if let error = error {
        showAlert(title: "Error", message: error.localizedDescription)
        return
    }
    var filesString = ""
    let files = response.files as! [GTLDriveFile]
    if !files.isEmpty {
        filesString += "Files:\n"
        for file in files {
            filesString += "\(file.name) (\(file.identifier)) \(file.webViewLink) \(file.webContentLink)\n"
        }
    } else {
        filesString = "No files found."
    }
    print(filesString)
}

// Creates the auth controller for authorizing access to the Drive API
private func createAuthController() -> GTMOAuth2ViewControllerTouch {
    let scopeString = scopes.joined(separator: " ")
    return GTMOAuth2ViewControllerTouch(
        scope: scopeString,
        clientID: kClientID,
        clientSecret: nil,
        keychainItemName: kKeychainItemName,
        delegate: self,
        finishedSelector: #selector(MessagesViewController.viewController(vc:finishedWithAuth:error:))
    )
}

// Handle completion of the authorization process, and update the Drive API
// with the new credentials
func viewController(vc: UIViewController, finishedWithAuth authResult: GTMOAuth2Authentication, error: NSError?) {
    if let error = error {
        service.authorizer = nil
        showAlert(title: "Authentication Error", message: error.localizedDescription)
        return
    }
    service.authorizer = authResult
    dismiss(animated: true, completion: nil)
    fetchFiles()
}

// Helper for showing an alert
func showAlert(title: String, message: String) {
    let alert = UIAlertController(
        title: title,
        message: message,
        preferredStyle: UIAlertControllerStyle.alert
    )
    let ok = UIAlertAction(
        title: "OK",
        style: UIAlertActionStyle.default,
        handler: nil
    )
    alert.addAction(ok)
    self.present(alert, animated: true, completion: nil)
}

The code now requests authentication, loads files, and then prints them in the debug area. Now, try to build and run; you will see the following:

Click on the arrow button in the bottom-right corner to maximize the screen and try to log in with any Google account you have. Once the authentication is done, you will see the files' information printed in the debug area.

Now, let's add a table view that will display the files' information; once a user selects a file, we will download that file and send it as an attachment to the conversation. Open MainInterface.storyboard, drag a table view from the Object Library, and add the following constraints:

Set the delegate and data source of the table view from Interface Builder by dragging while holding down the Ctrl key to the MessagesViewController.
Then, add an outlet to the table view, as follows, to be used to refresh the table with the files.

Drag a UITableView cell from the Object Library and drop it in the table view. In the Attribute Inspector, set the cell style to Basic and the identifier to cell. Now, return to MessagesViewController.swift.

Add the following property to hold the currently displayed files:

private var currentFiles = [GTLDriveFile]()

Edit the displayResultWithTicket function to be like this:

// Parse results and display
func displayResultWithTicket(ticket: GTLServiceTicket, finishedWithObject response: GTLDriveFileList, error: NSError?) {
    if let error = error {
        showAlert(title: "Error", message: error.localizedDescription)
        return
    }
    var filesString = ""
    let files = response.files as! [GTLDriveFile]
    self.currentFiles = files
    if !files.isEmpty {
        filesString += "Files:\n"
        for file in files {
            filesString += "\(file.name) (\(file.identifier)) \(file.webViewLink) \(file.webContentLink)\n"
        }
    } else {
        filesString = "No files found."
    }
    print(filesString)
    self.filesTableView.reloadData()
}

Now, add the following methods for the table view delegate and data source:

// MARK: - Table View methods -
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return self.currentFiles.count
}

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "cell")
    let file = self.currentFiles[indexPath.row]
    cell?.textLabel?.text = file.name
    return cell!
}

func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    let file = self.currentFiles[indexPath.row]
    // Download the file here to send as an attachment.
    if let downloadURLString = file.webContentLink {
        let url = NSURL(string: downloadURLString)
        if let name = file.name {
            let downloadedPath = (documentsPath() as NSString).appendingPathComponent("\(name)")
            let fetcher = service.fetcherService.fetcher(with: url as! URL)
            let destinationURL = NSURL(fileURLWithPath: downloadedPath) as URL
            fetcher.destinationFileURL = destinationURL
            fetcher.beginFetch(completionHandler: { (data, error) in
                if error == nil {
                    self.activeConversation?.insertAttachment(destinationURL, withAlternateFilename: name, completionHandler: nil)
                }
            })
        }
    }
}

private func documentsPath() -> String {
    let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
    return paths.first ?? ""
}

Now, build and run the app, and you will see the magic: select any file, and the app will download it, save it to the local disk, and send it as an attachment to the conversation, as illustrated:

How it works…

We started by installing the Google Drive SDK into the Xcode project. This SDK has all the APIs that we need to manage Drive files and user authentication. When you visit the Google developers' website, you will see two options to install the SDK: manually or using CocoaPods. I totally recommend using CocoaPods to manage your dependencies, as it is simple and efficient.

Once the SDK had been installed via CocoaPods, we added some variables to be used for the Google Drive API, the most important one being the client key. You can access this value from the project you have created in the Google Developers Console.

In the viewDidLoad function, first, we check if we have an authentication saved in the Keychain, and if so, we use it. We can do that by calling GTMOAuth2ViewControllerTouch.authForGoogleFromKeychain, which takes the Keychain name and client key as parameters to search for authentication.
This is useful, as it remembers the last authentication, so there is no need to ask for user authentication again if the user has already been authenticated before. In viewDidAppear, we check whether a user is already authenticated; in that case, we start fetching files from the Drive, and if not, we display the authentication controller, which asks the user to enter their Google account credentials.

To display the authentication controller, we present the authentication view controller created in the createAuthController() function. In this function, the Google Drive API provides us with the GTMOAuth2ViewControllerTouch class, which encapsulates all the logic for Google account authentication in your app. You need to pass the client key for your project, the Keychain name under which to save the authentication details, and the finished selector viewController(vc: UIViewController, finishedWithAuth authResult: GTMOAuth2Authentication, error: NSError?), which will be called after the authentication is complete. In that function, we check for errors, and if something goes wrong, we display an alert message to the user. If no error occurs, we start fetching files using the fetchFiles() function.

In the fetchFiles() function, we first create a query by calling GTLQueryDrive.queryForFilesList(). The GTLQueryDrive class holds all the information you need about your query, such as which fields to read, for example, name, fileExtension, and a lot of other fields that you can fetch from Google Drive. You can specify the page size if you are going to call with pagination, for example, 10 files at a time. Once you are happy with your query, execute it by calling service.executeQuery, which takes the query and the finished selector to be called when done. In our example, it will call the displayResultWithTicket function, which prepares the files to be displayed in the table view. Then, we call self.filesTableView.reloadData() to refresh the table view and display the list of files.

In the table view's didSelectRowAt indexPath: delegate function, we first read the webContentLink property from the GTLDriveFile instance, which is a download link for the selected file. To fetch a file from Google Drive, the API provides us with GTMSessionFetcher, which can fetch a file and write it directly to the device's local disk when you pass a local path to it. To create a GTMSessionFetcher, use the service.fetcherService factory, which gives you a fetcher instance from the file URL. Then, we create a local path for the downloaded file by appending the filename to the documents path of your app, and pass it to the fetcher via the following command:

fetcher.destinationFileURL = destinationURL

Once you have set up everything, call fetcher.beginFetch and pass a completion handler to be executed after the fetching finishes. Once the fetching completes successfully, you can get a reference to the current conversation and insert the file into it as an attachment. To do this, just call the following function:

self.activeConversation?.insertAttachment(destinationURL, withAlternateFilename: name, completionHandler: nil)

There's more…

Yes, there's more that you can do in the preceding example to make it fancier and more appealing to users. Check the following options to make it better:

- Show a loading indicator or progress bar while a file is downloading.
- Check whether the file is already downloaded, and if so, don't download it again.
- Add pagination to request only 10 files at a time.
- Add options to filter documents by type, such as PDF or images, or even by date.
- Add search for a file in your drive.

Showing a progress indicator

As we said, one of the features we can add to the preceding example is the ability to show a progress bar indicating the download progress of a file. Before looking at how to show a progress bar, let's install a library that is very helpful for managing and showing HUD indicators: MBProgressHUD. This library is available on GitHub at https://github.com/jdg/MBProgressHUD. As we agreed before, all packages are managed via CocoaPods, so now, let's install the library via CocoaPods, as shown:

1. Open the Podfile and update it as follows:

use_frameworks!

target 'PDFPicker' do
end

target 'MessagesExtension' do
    pod 'GoogleAPIClient/Drive', '~> 1.0.2'
    pod 'GTMOAuth2', '~> 1.1.0'
    pod 'MBProgressHUD', '~> 1.0.0'
end

2. Run the following command to install the dependencies:

pod install

3. Now, at the top of the MessagesViewController.swift file, add the following import statement to import the library:

import MBProgressHUD

4. Now, let's edit the didSelectRowAt indexPath function to be like this:

func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    let file = self.currentFiles[indexPath.row]
    // Download the file here to send as an attachment.
    if let downloadURLString = file.webContentLink {
        let url = NSURL(string: downloadURLString)
        if let name = file.name {
            let downloadedPath = (documentsPath() as NSString).appendingPathComponent("\(name)")
            let fetcher = service.fetcherService.fetcher(with: url as! URL)
            let destinationURL = NSURL(fileURLWithPath: downloadedPath) as URL
            fetcher.destinationFileURL = destinationURL

            var progress = Progress()
            let hud = MBProgressHUD.showAdded(to: self.view, animated: true)
            hud.mode = .annularDeterminate
            hud.progressObject = progress

            fetcher.beginFetch(completionHandler: { (data, error) in
                if error == nil {
                    hud.hide(animated: true)
                    self.activeConversation?.insertAttachment(destinationURL, withAlternateFilename: name, completionHandler: nil)
                }
            })

            fetcher.downloadProgressBlock = { (bytes, written, expected) in
                let p = Double(written) * 100.0 / Double(expected)
                print(p)
                progress.totalUnitCount = expected
                progress.completedUnitCount = written
            }
        }
    }
}

First, we create an instance of MBProgressHUD and set its mode to annularDeterminate, which displays a circular progress bar. The HUD updates its progress by keeping a reference to an NSProgress object. Progress has two important variables that determine the progress value: totalUnitCount and completedUnitCount. These two values are set inside the fetcher's progress block, downloadProgressBlock. The HUD is hidden in the completion block, which is called once the download is complete. Now build and run; after authentication, when you click on a file, you will see something like this:

As you can see, the progress view is updated with the download percentage to give the user an overview of what is going on.

Requesting files with pagination

Loading all files at once is easy from the development side, but it's wrong from the user experience side. It takes too much time at the beginning to fetch the list of all the files, and it would be better to request only 10 files at a time with pagination. In this section, we will see how to add the pagination concept to our example and request only 10 files at a time.
When a user scrolls to the end of the list, we will display a loading indicator, call the next page, and append the results to our current results. Implementing pagination is pretty easy and requires only a few changes to our code. Let's see how to do it:

1. We will start by adding the progress cell design in MainInterface.storyboard. Open the design of MessagesViewController and drag a new cell alongside our default cell.
2. Drag a UIActivityIndicatorView from the Object Library and place it as a subview of the new cell. Add center constraints to center it horizontally and vertically, as shown:
3. Now, select the new cell and go to the Attribute Inspector to add an identifier to the cell and disable selection, as illustrated:

Now, from the design side, we are ready. Open MessagesViewController.swift to add some tweaks to it.

Add the following two variables to the list of our current variables:

private var doneFetchingFiles = false
private var nextPageToken: String!

The doneFetchingFiles flag will be used to hide the progress cell when an attempt to load the next page from Google Drive returns an empty list. In that case, we know that we are done fetching files and there is no need to display the progress cell any more. The nextPageToken contains the token to be passed to the GTLQueryDrive query to ask it to load the next page.

Now, go to the fetchFiles() function and update it as shown:

func fetchFiles() {
    print("Getting files...")
    if let query = GTLQueryDrive.queryForFilesList() {
        query.fields = "nextPageToken, files(id, name, webViewLink, webContentLink, fileExtension)"
        query.mimeType = "application/pdf"
        query.pageSize = 10
        query.pageToken = nextPageToken
        service.executeQuery(query, delegate: self, didFinish: #selector(MessagesViewController.displayResultWithTicket(ticket:finishedWithObject:error:)))
    }
}

The only difference between the preceding code and the one before it is the setting of pageSize and pageToken. For pageSize, we set how many files we require per call, and for pageToken, we pass the token to get the next page. We receive this token in the response from the previous page call, which means that on the first call we don't have a token and it will be passed as nil.

Now, open the displayResultWithTicket function and update it like this:

// Parse results and display
func displayResultWithTicket(ticket: GTLServiceTicket, finishedWithObject response: GTLDriveFileList, error: NSError?) {
    if let error = error {
        showAlert(title: "Error", message: error.localizedDescription)
        return
    }
    var filesString = ""
    nextPageToken = response.nextPageToken
    let files = response.files as! [GTLDriveFile]
    doneFetchingFiles = files.isEmpty
    self.currentFiles += files
    if !files.isEmpty {
        filesString += "Files:\n"
        for file in files {
            filesString += "\(file.name) (\(file.identifier)) \(file.webViewLink) \(file.webContentLink)\n"
        }
    } else {
        filesString = "No files found."
    }
    print(filesString)
    self.filesTableView.reloadData()
}

As you can see, we first get the token to be used to load the next page. We get it by reading response.nextPageToken and assigning it to our new nextPageToken property so that we can use it while loading the next page. The doneFetchingFiles will be true only if the current page we are loading has no files, which means that we are done. Then, we append the new files we get to the current files we have.

We still need to decide when to fire the call for the next page. We will do this once the user scrolls down to the refresh cell that we have.
To do so, we will implement one of the UITableViewDelegate methods, willDisplayCell, as illustrated:

func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) {
    if !doneFetchingFiles && indexPath.row == self.currentFiles.count {
        // The refresh cell is about to appear, so load the next page
        fetchFiles()
        return
    }
}

For any cell that is about to be displayed, this function will be triggered with the indexPath of the cell. First, we check that we are not done fetching files and that the row is equal to the last row, and then we fire fetchFiles() again to load the next page.

As we added a new refresh cell at the bottom, we should update our UITableViewDataSource functions, such as numberOfRowsInSection and cellForRowAt. Check our updated functions, shown as follows:

func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return doneFetchingFiles ? self.currentFiles.count : self.currentFiles.count + 1
}

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    if !doneFetchingFiles && indexPath.row == self.currentFiles.count {
        return tableView.dequeueReusableCell(withIdentifier: "progressCell")!
    }
    let cell = tableView.dequeueReusableCell(withIdentifier: "cell")
    let file = self.currentFiles[indexPath.row]
    cell?.textLabel?.text = file.name
    return cell!
}

As you can see, the number of rows will be equal to the current files' count plus one for the refresh cell. If we are done fetching files, we return only the number of files. Now, everything seems perfect. When you build and run, you will see only 10 files listed, as shown:

And when you scroll down, you will see the progress cell, and 10 more files will be fetched.

Summary

In this article, we learned how to integrate an app with the Messages app.

Resources for Article:

Further resources on this subject:
iOS Security Overview [article]
Optimizing JavaScript for iOS Hybrid Apps [article]
Testing our application on an iOS device [article]

Getting started with building an ARCore application for Android

Sugandha Lahoti
24 Apr 2018
9 min read
Google developed ARCore to be accessible from multiple development platforms (Android [Java], Web [JavaScript], Unreal [C++], and Unity [C#]), thus giving developers plenty of flexibility and options to build applications on various platforms. While each platform has its strengths and weaknesses, all the platforms essentially extend from the native Android SDK that was originally built as Tango. This means that regardless of your choice of platform, you will need to install and be somewhat comfortable working with the Android development tools.

In this article, we will focus on setting up the Android development tools and building an ARCore application for Android. The following is a summary of the major topics we will cover in this post:

- Installing Android Studio
- Installing ARCore
- Build and deploy
- Exploring the code

Installing Android Studio

Android Studio is a development environment for coding and deploying Android applications. As such, it contains the core set of tools we will need for building and deploying our applications to an Android device. After all, ARCore needs to be installed to a physical device in order to test. Follow the given instructions to install Android Studio for your development environment:

1. Open a browser on your development computer to https://developer.android.com/studio.
2. Click on the green DOWNLOAD ANDROID STUDIO button.
3. Agree to the Terms and Conditions and follow the instructions to download.
4. After the file has finished downloading, run the installer for your system.
5. Follow the instructions on the installation dialog to proceed. If you are installing on Windows, ensure that you set a memorable installation path that you can easily find later, as shown in the following example:
6. Click through the remaining dialogs to complete the installation.
7. When the installation is complete, you will have the option to launch the program. Ensure that the option to launch Android Studio is selected and click on Finish.

Android Studio comes embedded with OpenJDK. This means we can omit the steps for installing Java, on Windows at least. If you are doing any serious Android development on Windows, then you should go through the steps on your own to install the full Java JDK 1.7 and/or 1.8, especially if you plan to work with older versions of Android.

On Windows, we will install everything to C:\Android; that way, we can keep all the Android tools in one place. If you are using another OS, use a similar well-known path.

Now that we have Android Studio installed, we are not quite done. We still need to install the SDK tools that will be essential for building and deployment. Follow the instructions in the next exercise to complete the installation:

1. If you have not installed the Android SDK before, you will be prompted to install the SDK when Android Studio first launches, as shown:
2. Select the SDK components and ensure that you set the installation path to a well-known location, again, as shown in the preceding screenshot.
3. Leave the Welcome to Android Studio dialog open for now. We will come back to it in a later exercise.

That completes the installation of Android Studio. In the next section, we will get into installing ARCore.
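Before that, it can be worth confirming that the SDK's platform tools landed where you expect. Assuming the C:\Android install path used above (adjust for your own location), the following commands in a command prompt should print an adb version string and an empty device list:

C:\Android\sdk\platform-tools\adb version
C:\Android\sdk\platform-tools\adb devices

If the commands are not found, recheck the SDK installation path you chose during setup.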
Installing ARCore

Of course, in order to work with or build any ARCore applications, we will need to install the SDK for our chosen platform. Follow the given instructions to install the ARCore SDK:

1. We will use Git to pull down the code we need directly from the source. You can learn more about Git and how to install it on your platform at https://git-scm.com/book/en/v2/Getting-Started-Installing-Git, or use Google to search: getting started installing Git. Ensure that when you install on Windows, you select the defaults and let the installer set the PATH environment variables.
2. Open Command Prompt or a Windows shell and navigate to the Android (C:\Android on Windows) installation folder.
3. Enter the following command:

git clone https://github.com/google-ar/arcore-android-sdk.git

4. This will download and install the ARCore SDK into a new folder called arcore-android-sdk, as illustrated in the following screenshot:
5. Ensure that you leave the command window open. We will be using it again later.

Installing the ARCore service on a device

Now, with the ARCore SDK installed in our development environment, we can proceed with installing the ARCore service on our test device. Use the following steps to install the ARCore service on your device:

NOTE: this step is only required when working with the Preview SDK of ARCore. When Google ARCore 1.0 is released, you will not need to perform this step.

1. Grab your mobile device and enable the developer and debugging options by doing the following:
   1. Opening the Settings app
   2. Selecting System
   3. Scrolling to the bottom and selecting About phone
   4. Scrolling again to the bottom and tapping on Build number seven times
   5. Going back to the previous screen and selecting Developer options near the bottom
   6. Selecting USB debugging
2. Download the ARCore service APK from https://github.com/google-ar/arcore-android-sdk/releases/download/sdk-preview/arcore-preview.apk to the Android installation folder (C:\Android). Also note that this URL will likely change in the future.
3. Connect your mobile device with a USB cable. If this is your first time connecting, you may have to wait several minutes for drivers to install. You will then be prompted on the device to allow the connection. Select Allow to enable the connection.
4. Go back to your Command Prompt or Windows shell and run the following command:

adb install -r -d arcore-preview.apk
// ON WINDOWS USE:
sdk\platform-tools\adb install -r -d arcore-preview.apk

After the command is run, you will see the word Success. This completes the installation of ARCore for the Android platform. In the next section, we will build our first sample ARCore application.

Build and deploy

Now that we have all the tedious installation stuff out of the way, it's time to build and deploy a sample app to your Android device. Let's begin by jumping back to Android Studio and following the given steps:

1. Select the Open an existing Android Studio project option from the Welcome to Android Studio window. If you accidentally closed Android Studio, just launch it again.
2. Navigate to and select the Android\arcore-android-sdk\samples\java_arcore_hello_ar folder, as follows:
3. Click on OK. If this is your first time running this project, you will encounter some dependency errors, such as the one here:
4. In order to resolve the errors, just click on the link at the bottom of the error message. This will open a dialog, and you will be prompted to accept and then download the required dependencies. Keep clicking on the links until you see no more errors.
5. Ensure that your mobile device is connected and then, from the menu, choose Run - Run. This should start the app on your device, but you may still need to resolve some dependency errors. Just remember to click on the links to resolve the errors.
6. This will open a small dialog. Select the app option.
If you do not see the app option, select Build - Make Project from the menu. Again, resolve any dependency errors by clicking on the links.

"Your patience will be rewarded." - Alton Brown

7. Select your device from the next dialog and click on OK. This will launch the app on your device. Ensure that you allow the app to access the device's camera. The following is a screenshot showing the app in action:

Great, we have built and deployed our first Android ARCore app together. In the next section, we will take a quick look at the Java source code.

Exploring the code

Now, let's take a closer look at the main pieces of the app by digging into the source code. Follow the given steps to open the app's code in Android Studio:

1. From the Project window, find and double-click on HelloArActivity, as shown:
2. After the source is loaded, scroll through the code to the following section:

private void showLoadingMessage() {
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            mLoadingMessageSnackbar = Snackbar.make(
                HelloArActivity.this.findViewById(android.R.id.content),
                "Searching for surfaces...",
                Snackbar.LENGTH_INDEFINITE);
            mLoadingMessageSnackbar.getView().setBackgroundColor(0xbf323232);
            mLoadingMessageSnackbar.show();
        }
    });
}

3. Note the highlighted text, "Searching for surfaces...". Select this text and change it to "Searching for ARCore surfaces...". The showLoadingMessage function is a helper for displaying the loading message. Internally, this function calls runOnUiThread, which in turn creates a new instance of Runnable with an internal run function. We do this to avoid blocking the UI thread, a major no-no. Inside the run function is where the message is set and the message Snackbar is displayed.
4. From the menu, select Run - Run 'app' to start the app on your device. Of course, ensure that your device is connected by USB.
5. Run the app on your device and confirm that the message has changed.

Great, now we have a working app with some of our own code. This certainly isn't a leap, but it's helpful to walk before we run. In this article, we started exploring ARCore by building and deploying an AR app for the Android platform. We did this by first installing Android Studio. Then, we installed the ARCore SDK and the ARCore service onto our test mobile device. Next, we loaded up the sample ARCore app and patiently installed the various required build and deploy dependencies. After a successful build, we deployed the app to our device and tested it. Finally, we made a minor code change and deployed another version of the app.

You read an excerpt from the book, Learn ARCore - Fundamentals of Google ARCore, written by Micheal Lanham. This book will help you create next-generation Augmented Reality and Mixed Reality apps with the latest version of Google ARCore.

Read More
Google ARCore is pushing immersive computing forward
Types of Augmented Reality targets

Creating a custom layout implementation for your Android app

Aarthi Kumaraswamy
06 Apr 2018
5 min read
In most applications, you'll find that a combination of the ConstraintLayout, CoordinatorLayout, and some of the more primitive layout classes (such as LinearLayout and FrameLayout) is more than enough to achieve any layout requirements you can dream up for your user interface. Every now and again, though, you'll find yourself needing a custom layout manager to achieve an effect required for the application.

Layout classes extend from the ViewGroup class, and their job is to tell their child widgets where to position themselves, and how large they should be. They do this in two phases: the measurement phase and the layout phase.

All View implementations are expected to provide measurements for their actual size according to specifications. These measurements are then used by the View widget's parent ViewGroup to allocate the amount of space the widget will consume on the screen. For example, a View might be told to consume, at most, the screen width. The View must then determine how much of that space it actually requires, and record that size in its measured dimensions.

The second phase is the layout phase, and it is conducted by the ViewGroup parent of each View widget. This phase positions the View on the screen, relative to its parent ViewGroup's location, and specifies the actual size that the widget will consume on the screen (typically based on the measured size calculated in the measurement phase).

When you implement your own ViewGroup, you'll need to ensure that all of your child View widgets are given a chance to measure themselves before you perform the actual layout operations. Let's build a layout class to arrange its children in a circle. To keep the implementation simple, we'll assume that all the child widgets are the same size (for example, if they were all icons):

1. Right-click on the widget package in the travel claim example app, and select New | Java Class.
2. Name the new class CircleLayout.
3. Change the Superclass to android.view.ViewGroup.
4. Click OK to create the new class.
5. Declare the standard ViewGroup constructors:

public CircleLayout(final Context context) {
    super(context);
}

public CircleLayout(
        final Context context,
        final AttributeSet attrs) {
    super(context, attrs);
}

public CircleLayout(
        final Context context,
        final AttributeSet attrs,
        final int defStyleAttr) {
    super(context, attrs, defStyleAttr);
}

6. Override the onMeasure method to calculate the size of the CircleLayout and all of its child View widgets. The measurement specifications are passed in as int values, which are interpreted using the static methods in the MeasureSpec class (a short illustrative sketch follows this step). Measurement specifications come in two flavors: at most and exactly, and each has a size value attached. In this particular layout, we always measure the CircleLayout as the size given in the specification. This means that the CircleLayout will always consume the maximum amount of space available.
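For reference, this is roughly how a view can unpack those packed int specifications; a standalone illustrative sketch, not code that this CircleLayout needs:

// Illustrative only: inspecting a width specification inside onMeasure
final int mode = MeasureSpec.getMode(widthMeasureSpec);
final int size = MeasureSpec.getSize(widthMeasureSpec);

if (mode == MeasureSpec.EXACTLY) {
    // the parent demands exactly 'size' pixels
} else if (mode == MeasureSpec.AT_MOST) {
    // we may choose any width up to 'size' pixels
} else {
    // MeasureSpec.UNSPECIFIED: no constraint was imposed
}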
It also expects all of its children to be able to specify sizes without the match_parent attribute (as this will cause each child to take up all the available space):

@Override
protected void onMeasure(
        final int widthMeasureSpec,
        final int heightMeasureSpec) {
    super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    measureChildren(widthMeasureSpec, heightMeasureSpec);
    setMeasuredDimension(
            MeasureSpec.getSize(widthMeasureSpec),
            MeasureSpec.getSize(heightMeasureSpec));
}

The next method to implement is the onLayout method. This performs the actual arrangement of the child View widgets within the CircleLayout by invoking their layout method. The layout method should never be overridden, because it's closely tied to the platform and performs several other important actions (such as notifying layout listeners); instead, you should override onLayout but invoke layout. CircleLayout assumes that all the child View widgets are of the same size (and forces this as part of the onLayout implementation). This onLayout method simply calculates the available space, and then positions the child View widgets in a circle around the outside edge:

@Override
protected void onLayout(
        final boolean changed,
        final int left,
        final int top,
        final int right,
        final int bottom) {

    final int childCount = getChildCount();
    if (childCount == 0) {
        return;
    }

    final int width = right - left;
    final int height = bottom - top;

    // if we have children, we assume they're all the same size
    final int childrenWidth = getChildAt(0).getMeasuredWidth();
    final int childrenHeight = getChildAt(0).getMeasuredHeight();

    final int boxSize = Math.min(
            width - childrenWidth,
            height - childrenHeight);

    for (int i = 0; i < childCount; i++) {
        final View child = getChildAt(i);
        final int childWidth = child.getMeasuredWidth();
        final int childHeight = child.getMeasuredHeight();

        final double x = Math.sin((Math.PI * 2.0)
                * ((double) i / (double) childCount));
        final double y = -Math.cos((Math.PI * 2.0)
                * ((double) i / (double) childCount));

        final int childLeft = (int) (x * (boxSize / 2))
                + (width / 2) - (childWidth / 2);
        final int childTop = (int) (y * (boxSize / 2))
                + (height / 2) - (childHeight / 2);
        final int childRight = childLeft + childWidth;
        final int childBottom = childTop + childHeight;

        child.layout(childLeft, childTop, childRight, childBottom);
    }
}

Although the implementation of the onLayout method is quite long, it's also relatively simple. Most of the code is concerned with determining the desired position of the child View widgets. Layout code needs to execute as quickly as possible, and should avoid allocating any objects during the onMeasure and onLayout methods (similar to the rules of onDraw). Layout is a critical part of building the screen from a performance standpoint, because no rendering can actually occur until the layout is complete. The layout will also be rerun every time its structure changes; for example, if you add or remove any child View widgets, or change the size or position of the ViewGroup. Changing the size of a ViewGroup might happen on every frame if you use a CoordinatorLayout where the ViewGroup is being collapsed (or if you change its size as part of a property animation).

You read an excerpt from the book, Hands-On Android UI Development by Jason Morris. For more recipes on cutting-edge Android UI tasks such as creating themes, animations, custom widgets, and more, give this book a try.

Introduction to SQL and SQLite

Packt
10 Feb 2016
22 min read
In this article by Gene Da Rocha, author of the book Learning SQLite for iOS, we are introduced to the background of the Structured Query Language (SQL) and the mobile database SQLite. Whether you are an experienced technologist in SQL or a novice, this book will be a great aid in helping you understand this cool subject, which is gaining momentum. SQLite is the database used on a mobile smartphone or tablet that is local to the device. SQLite has been modified by different vendors to harden and secure it for a variety of uses and applications. (For more resources related to this topic, see here.) SQLite was released in 2000 and has grown to be the de facto database on mobile smartphones today. It is an open source piece of software with a low footprint or overhead, which is packaged as a relational database management system. Mr. D. Richard Hipp is the inventor and author of SQLite, which was designed and developed for a battleship program while he was at a company called General Dynamics, working for the U.S. Navy. The program was built for an HP-UX operating system with Informix as the database engine. Upgrading or installing the database software took many hours, and it was an over-the-top database for this experienced DBA (database administrator). Mr. Hipp wanted a portable, self-contained, easy-to-use database, which could be mobile, quick to install, and not dependent on the operating system. Initially, SQLite 1.0 used gdbm as its storage system, but later, it was replaced with SQLite's own B-tree implementation and technology for the database. The B-tree implementation was enhanced to support transactions and store rows of data in key order. From 2001 onwards, open source family extensions for other languages, such as Java, Python, and Perl, were written to support their applications, and the database's popularity within the open source community and beyond kept growing. Originally based upon relational algebra and tuple relational calculus, SQL consists of a data definition and manipulation language. The scope of SQL includes data insert, query, update and delete, schema creation and modification, and data access control. Although SQL is often described as, and to a great extent is, a declarative language (4GL), it also includes procedural elements. Internationalization support for UTF-16 and UTF-8, including text-collating sequences, arrived with versions 2 and 3 in 2004, supported by funding from AOL (America Online). SQLite works with a variety of browsers, which sometimes have in-built support for this technology; for example, there are many extensions for Chrome or Firefox that allow you to manage the database. Many features have been added to this product over time, and the growth in mobile phones sets this quick and easy relational database system up for a quantum leap in use within the mobile and tablet application space. SQLite uses PostgreSQL as a point of reference. SQLite does not enforce any type checking; the schema does not constrain the data, since the type of a value is dynamic and conversions are applied according to the column's declared affinity. About SQL In June 1970, a research paper was published by Dr. E.F. Codd called A Relational Model of Data for Large Shared Data Banks. The Association for Computing Machinery (ACM) accepted Codd's data and technology model, which has today become the standard for the RDBMS (Relational Database Management System).
IBM Corporation invented the language, originally called Structured English Query Language (SEQUEL); the word "English" was later dropped and the name became SQL, although it is still commonly pronounced "sequel". IBM went on to produce its own commercial SQL product, followed by Oracle, Sybase, and Microsoft with SQL Server. The standard commercial relational database management system language today is SQL. Today, there are ANSI standards for SQL, and there are many variations of this technology. Besides the manufacturers mentioned, there are also others available in the open source world, for example, an SQL query engine such as Presto. This is a distributed SQL engine under open source, which is made to execute interactive analytic queries. Presto queries run against databases with a variety of data source sizes, from gigabytes to petabytes. Companies such as Facebook and Dropbox use the Presto SQL engine for their queries and analytics in data warehouse and related applications. SQL is made up of a data manipulation and definition language built with tuple and algebra calculation in a relational format. The SQL language has a variety of statements, but most would recognize the INSERT, SELECT, UPDATE, and DELETE statements. These statements form a part of the database schema management process and aid data access and security access. SQL includes procedural elements as part of its setup. Is SQLite used anywhere? Companies may use applications without being aware of the SQL engines that drive their data storage and information. Although SQL became a standard with the American National Standards Institute (ANSI) in 1986, SQL features and functionality are not 100% portable among different SQL systems and require code changes to be useful. These standards are always up for revision to ensure ANSI compliance is maintained. There are many variants of SQL engines on the market from companies such as Oracle, SQL Server (Microsoft), DB2 (IBM), Sybase (SAP), MySQL (Oracle), and others. Different companies operate several types of pricing structures, such as free open source, or paid per seat, by transactions, or by server types or loads. Today, there is a preference for using server technology and SQL in the cloud with different providers, for example, Amazon Web Services (AWS). SQLite, as its name suggests, is SQL in a light environment, which is also flexible and versatile. An enveloped and embedded database among other processes SQLite has been designed and developed to work and coexist with other applications and processes in its area. The RDBMS is tightly integrated with the native application software that requires storing information, but it is masked and hidden from users and requires minimal administration or maintenance. SQLite works through different APIs hidden from users; it requires minimal supervision; there is no network traffic, no network access conflicts or configuration, no access limitations with privileges or permissions, and a largely reduced overhead. These qualities make it easier and quicker to deploy your applications to the app stores or other locations. The different components work seamlessly together in a harmonized way to link up data with the SQLite library and other processes.
For example, an Apache process or a C/C++ process can work directly with the SQLite C library, interfacing and linking with it so that the database becomes a seamless part of the application and integrates with the operating system. SQLite has been developed and integrated in such a way that it will interface and gel with a variety of applications and multiple solutions. As a lightweight RDBMS, it can stand on its own through its versatility and is not cumbersome or too complex to benefit your application. It can be used on many platforms and comes with a binary compatible format, which is easier to dovetail within your mobile application. Different types of IT professionals will be involved with SQLite, since it holds the data, affects performance, and involves database designers, user or mobile interface design specialists, analysts, and consultancy types. These professionals can use their previous knowledge of SQL to quickly grasp SQLite. SQLite can act both as a data processor for information and as a way to work with data in memory to perform well. The different software pieces of the jigsaw can interface properly by using the C API to SQLite from another programming language's code. For example, C or C++ code can be programmed to communicate with the SQLite C API, which will then talk to the operating system and thus communicate with the database engine. Another language, such as PHP, can communicate using its own language data objects, which will in turn communicate with the SQLite C API and the database. SQLite is a great database to learn, especially for computer scientists who want a tool that opens the mind to investigating caching, B-tree structures and algorithms, database design architecture, and other concepts. The architecture of the SQLite database SQLite is implemented as a C library that sits above the OS interface; there is also a C source code file for the TCL language binding, tclsqlite.c. Since many technologies and reserved words are shared between languages, the sqlite3 prefix is used at the beginning of names within the SQLite library to avoid any confusion. The core functions are found in main.c, legacy.c, and vdbeapi.c. The Tokeniser code base is found within tokenize.c. Its task is to look at strings that are passed to it and partition or separate them into tokens, which are then passed to the parser. The Parser code base is found within parse.y. The Lemon LALR(1) parser generator is the parser for SQLite; it uses the context of tokens and assigns them a meaning. To keep within the low-sized footprint of the RDBMS, only one C file is used for the parser generator. The Code Generator then takes the tokens output by the parser and produces virtual machine code that will carry out the work of the SQL statements. Several files, such as attach.c, build.c, delete.c, select.c, and update.c, handle the SQL statements and syntax. The virtual machine executes the code that is generated by the Code Generator. It has built-in storage, and each instruction may have up to three additional operands as part of each opcode. The source file is called vdbe.c, which is a part of the SQLite database library. Built in is also a computing engine, which has been specially created to integrate with the database system. The virtual machine's interface to the rest of the SQLite library is defined in the vdbe.h header file, while vdbeaux.c contains utilities used by other modules.
The vdbeapi.c file also connects the virtual machine with sqlite3_bind and other related interfaces. The C language routines are called from the SQL functions that reference them. For example, functions such as count() are defined in func.c, and date functions are located in date.c. B-tree is the type of table implementation used in SQLite, and the C source file is btree.c. The btree.h header file defines the interface to the B-tree system. There is a separate B-tree for every table and index, but they are all held within the same file. There is a header portion within btree.c, which holds details of the B-tree in a large comment field. The B-tree asks the Pager, or Page Cache, for data in a fixed-size format. The default page size is 1024 bytes, and it can be anywhere between 512 and 65536 bytes. Commit and rollback operations, coupled with the caching, reading, and writing of data, are handled by the Page Cache, or Pager. Data locking mechanisms are also handled by the Page Cache. The C file pager.c is implemented to handle these requests within the SQLite library, and its header file is pager.h. The OS interface is defined in os.h. It addresses how SQLite can be used on different operating systems and become transparent and portable to the user, thus becoming a valuable solution for any developer. An abstraction layer to handle Win32 and POSIX-compliant systems is also in place. Different operating systems have their own C files. For example, os_win.c is for Windows and os_unix.c is for Unix, coupled with their own os_win.h and os_unix.h header files. The util.c C file handles memory allocation and string comparisons, and the utf.c C file holds the Unicode conversion subroutines. Since the memory of the device is limited and the database size has the same constraints, the developer has to think outside the box to use these techniques. These types of memory and resource management form a part of the approach, much like the overlay techniques used in the past when disk and memory were limited. A flavor of the SQL involved might look like this:

  SELECT parameter1, STDDEV(parameter2)
      FROM Table1
      GROUP BY parameter1
      HAVING parameter1 > MAX(parameter3)

Features As part of its standards support, SQLite uses and implements most of the SQL-92 standard, but not all the potential features or parts of functionality are used or realized. The support for triggers is not 100%, as a trigger cannot write output to views; as a substitute, the INSTEAD OF statement can be used. As mentioned previously, the use of a type for a column is different: most relational database systems assign a type to the column, whereas SQLite assigns a type to each individual value. SQLite will convert a string into an integer if the column's preferred type is an integer. This is a good piece of functionality when SQLite is bound to a dynamically typed scripting language, but the technique is not portable to other RDBMS systems. SQLite has also drawn criticism for not having as good a data integrity mechanism as others in relation to statically typed columns. As mentioned previously, it has bindings to many languages, such as Basic, C, C#, C++, D, Java, JavaScript, Lua, PHP, Objective-C, Python, Ruby, and TCL. Its popularity within the open source community and its usage by customers and developers have enabled its growth to continue.
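To make that value-conversion behavior concrete, here is a small hedged sketch of SQLite's type affinity at work; the table and column names are invented for illustration:

CREATE TABLE readings(sensor_id INTEGER, value TEXT);
-- the string '42' is converted and stored as the integer 42
INSERT INTO readings(sensor_id, value) VALUES ('42', 'ok');
-- 'abc' cannot be converted to an integer, so it is stored as text anyway
INSERT INTO readings(sensor_id, value) VALUES ('abc', 'ok');
SELECT sensor_id, typeof(sensor_id) FROM readings;
-- returns: 42|integer and abc|text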
This lightweight RDBMS can be used on Google Chrome, Firefox, Safari, Opera, and the Android browsers, and it has middleware support using ADO.NET, ODBC, COM (ActiveX), and XULRunner. It also has support for web application frameworks such as Django (Python-based), Ruby on Rails, and Bugzilla (Mozilla). Other applications, such as Adobe Photoshop Lightroom and Skype, use SQLite too. It is also part of the Windows 8, Symbian OS, Android, and OpenBSD operating systems, and Apple includes it with API support via OS X. Apart from not having the large overhead of other database engines, SQLite has some major enhancements, such as the EXPLAIN keyword, along with its manifest typing. To control constraint conflicts, the REPLACE and ON CONFLICT statements are used. Within the same query, multiple independent databases can be accessed using the DETACH and ATTACH statements. New SQL functions and collating sequences can be created using the predefined APIs, which offer much more flexibility. As there is no configuration required, SQLite just does the job and works. There is no need to initialize, stop, restart, or start server processes, and no administrator is required to create the database with proper access control or security permits. After any failure, no user actions are required to recover the database, since it is self-repairing: SQLite is more advanced than one might think at first. Unlike other RDBMS, it does not require a server to serve up data or incur network traffic costs. There are no TCP/IP calls and no frequent communication backwards and forwards. SQLite is direct: the operating system process deals with database access to its file, and controls database writes and reads with no middleman process handshaking. By having no server backend, the process of installation, configuration, or administration is reduced significantly, and access to the database is granted to any program that requires this type of data operation. This is an advantage in one way, but also a disadvantage for security and protection from data-driven misuse, and for data concurrency or data row-locking mechanisms. On the other hand, it allows the database to be accessed several times by different applications at the same time. SQLite supports portability of the cross-platform database file. The database file can be updated on one system and copied to another, whether 32-bit or 64-bit, with different architectures; this does not make a difference to SQLite. The use of different architectures and the developers' promise to keep the file format stable and compatible across previous, current, and future versions will allow this database to grow and thrive. SQLite databases don't need to migrate old data to a newly formatted, upgraded database; it just works. By having a single disk file for the database, the information can be copied onto a USB drive and shared, or just reused on another device very quickly, keeping all the information intact. Another feature of this portable database is its size, which can start from a single 512-byte page and expand to 2147483646 pages at 65536 bytes per page, or 140,737,488,224,256 bytes, which equates to about 140 terabytes.
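As a hedged illustration of the ATTACH, DETACH, and conflict-handling statements mentioned above, with invented file and table names:

-- work with a second, independent database inside the same connection
ATTACH DATABASE 'archive.db' AS archive;
INSERT INTO archive.orders SELECT * FROM main.orders;
DETACH DATABASE archive;

-- overwrite the existing row with id 1 instead of failing on the constraint
INSERT OR REPLACE INTO users(id, name) VALUES (1, 'Gene');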
Most other RDBMS are much larger; IBM's Cloudscape is small, with a 2 MB jar file, but it is still larger than SQLite. The Firebird alternative's client (frontend) library is about 350 KB, whereas Oracle's Berkeley DB is around 450 KB without SQL support and with one simple key/value pair option. This advanced portable database system and its source code are in the public domain; there is no copyright or any claim on the source code. However, there are open source license issues and controls for some test code and documentation. This is great news for developers who might want to code up new extensions or database functionality that works with their programs, which could be made into a 'product extension' for SQLite. You rarely get this sort of access to SQL source code elsewhere, since everything else has a patent, limited access, or just no access. There are signed affidavits by developers disowning any copyright interest in the SQLite code. SQLite is different, because it is just not governed or ruled by copyright law; arguably, this is the way software should really work or be used. As mentioned earlier, SQLite uses manifest typing. This means that you can define a column with a datatype of integer, but its property is dictated by the inputted values and not the column itself. This allows any value to be stored in any column regardless of the declared data type, with the exception of an integer primary key. This feature suits TCL or Python, which are dynamically typed programming languages. When you allocate space in most RDBMS with a declared char(50) column, the database system will allocate the full 50 bytes of disk space even if you do not use them all. In SQLite, if only three characters of a char(50) column are used, the disk space consumed is only the three characters plus two bytes of overhead (including data type and length), not the 50 characters that other database engines would use. This type of operation reduces disk space usage and uses only the space that is required. By using small allocations with variable-length records, applications run faster, database access is quicker, manifest typing can be used, and the database stays small and nimble. The ease of using this RDBMS makes it easy for most programmers at an intermediate level to create applications using this technology, helped by its detailed documentation and examples. Other RDBMS are internally complex, with links to data structures and objects. SQLite instead exposes its virtual machine language: placing the EXPLAIN reserved word in front of a query dumps the virtual machine program for it. The virtual machine has benefited this database engine by providing an excellent, controlled environment between the backend (where the results are computed and output) and the frontend (where the SQL is parsed and executed). The SQL implementation language is comparable to other RDBMS, especially given its lightweight base; it supports recursive triggers and the FOR EACH ROW trigger behavior. The FOR EACH STATEMENT behavior is not currently supported, but that functionality cannot be ruled out in the future. There is nearly complete ALTER TABLE support, with some exceptions: RENAME TABLE and ADD COLUMN are supported, but DROP COLUMN, ALTER COLUMN, and ADD CONSTRAINT are not. Again, this functionality cannot be ruled out in the future. Similarly, RIGHT OUTER JOIN and FULL OUTER JOIN are not supported, but LEFT OUTER JOIN is implemented.
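To ground the last two points, here is a small hedged sketch (with invented table names) showing the EXPLAIN keyword and the supported ALTER TABLE forms:

-- dumps the virtual machine (VDBE) opcodes instead of running the query
EXPLAIN SELECT name FROM users WHERE id = 1;

-- the ALTER TABLE forms that SQLite supports
ALTER TABLE users ADD COLUMN email TEXT;
ALTER TABLE users RENAME TO customers;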
The views within this RDBMS are read-only. As described so far in this article, SQLite is a nimble and easy-to-use database that developers can engage with quickly, using existing skills, and ship to mobile devices and tablets far more simply than ever before. With the advantage of today's HTML5 and other JavaScript frameworks, the advancement of SQL and the number of SQLite installations will take a quantum leap. Working with SQLite The website for SQLite is www.sqlite.org, where you can download all the binaries for the database, documentation, and source code, which works on operating systems such as Linux, Windows, and Mac OS X. The SQLite shared library, or DLL, is the library to be used for the Windows operating system and can be installed and referenced via Visual Studio with the C++ language. The developer writes code against the library, which is linked as a reference in the application; when execution takes place, the DLL loads and all references in the code link to those in the DLL at the right time. The SQLite3 command-line program, CLP, is a self-contained program that has all the components built in for you to run at the command line. It also comes with an extension for TCL, so from within TCL you can connect to and update the SQLite database. SQLite downloads come as a TAR file for Unix systems and a ZIP file for Windows systems. iOS with SQLite Among the hundreds of thousands of apps on all the app stores, it would be difficult to find one that does not require a database of some sort to store or handle data in a particular way. There are different formats of data, called data feeds, but they all require some temporary or permanent storage. Small amounts of data may not need it, but medium or large amounts of data will require a storage mechanism, such as a database, to assist the app. Using SQLite with iOS enables developers to use their existing skills to run their DBMS on this platform as well. For SQLite, there is the embedded C library, which is available to use with iOS in the Xcode IDE. Apple fully supports SQLite, which is pulled in with an include statement as part of the library call, but there is no ready-made mechanism to engage it, so developers also tend to use FMDB, a Cocoa/Objective-C wrapper around SQLite. As SQLite is fast and lightweight, it lets you use existing SQL knowledge, it is reliable and supported by Apple on Mac OS and iOS as well as by many developers, and it can be integrated without much outside involvement. The SQLite library is added under the General tab once the main project name is highlighted on the left-hand side. Then, at the bottom of the page, within 'Linked Frameworks and Libraries', click + and a modal window appears. Enter the word sqlite, and then select the libsqlite3.dylib library. This is one way to set up the environment to get going. In effect, it is the libsqlite3.dylib library within the framework section that allows the API to work with the SQLite commands. An SQLite database file is created in iOS in much the same way as a text file: it is saved to the location (the documents directory) that iOS provides for the app. Before anything can happen, the database must be opened and made ready for querying; on success, the open call returns the constant SQLITE_OK, which is defined as 0.
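Here is a minimal hedged C sketch of the open call just described, together with the sqlite3_exec create-table call covered next; the file name and SQL are invented placeholders, and error handling is reduced to the status checks the text mentions:

#include <sqlite3.h>
#include <stdio.h>

int main(void) {
    sqlite3 *db = NULL;
    /* in a real iOS app, this path would point into the documents directory */
    if (sqlite3_open("app.db", &db) != SQLITE_OK) {   /* SQLITE_OK == 0 */
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    char *errMsg = NULL;
    const char *sql =
        "CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT);";
    /* the callback is optional; NULL is fine for statements returning no rows */
    if (sqlite3_exec(db, sql, NULL, NULL, &errMsg) != SQLITE_OK) {
        fprintf(stderr, "exec failed: %s\n", errMsg);
        sqlite3_free(errMsg);
    }
    sqlite3_close(db);
    return 0;
}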
In order to create a table in the SQLite database using the iOS connection and API, the method sqlite3_exec is set up to work with the open sqlite3 object and the CREATE TABLE SQL statement, with an optional callback function. When the statement executes and a status of SQLITE_OK is returned, it is successful; otherwise, the constant SQLITE_ERROR, defined as 1, is returned. Once a wrapper is used and access to the SQLite commands is available, it is an easy process to use SQLite with iOS. Summary In this article, you read about the history of SQL, the impact of relational databases, and the use of a mobile SQL database, namely SQLite. It outlines the history and beginnings of SQLite and how it has grown to be the most used database on mobile devices so far. Resources for Article:   Further resources on this subject: Team Project Setup [article] Introducing Sails.js [article] Advanced Fetching [article]

Building VR experiences with React VR 2.0: How to create maze that's new every time you play

Sunith Shetty
12 Jun 2018
16 min read
In today's tutorial, we will examine the functionality required to build a simple maze. There are a few ways we could build a maze. The most straightforward way would be to fire up our 3D modeler package (say, Blender) and create a labyrinth out of polygons. This would work fine and could be very detailed. However, it would also be very boring. Why? The first time we get through the maze will be exciting, but after a few tries, you'll know the way through. When we construct VR experiences, we usually want people to visit often and have fun every time. This tutorial is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you will learn how to create amazing 360 and virtual reality content that runs directly in your browsers. A modeled labyrinth would be boring. Life is too short to do boring things. So, we want to generate the Maze randomly. This way, you can change the Maze every time so that it'll be fresh and different. We'll do that with random numbers; however, to ensure that the Maze doesn't shift around us between renders, we actually want pseudo-random numbers. To start doing that, we'll need a basic application created. Please go to your VR directory and create an application called 'WalkInAMaze':

react-vr init WalkInAMaze

Almost random–pseudo random number generators To have replay value, or to be able to compare scores between people, we really need a seedable pseudo-random number generator. The basic JavaScript Math.random() cannot be given a seed; it effectively gives you a different sequence of numbers every time. We need a pseudo-random number generator that takes a seed value. If you give the same seed to the random number generator, it will generate the same sequence of random numbers. (They aren't completely random, but are very close.) Random number generators are a complex topic; for example, they are used in cryptography, and if your random number generator isn't completely random, someone could break your code. We aren't so worried about that; we just want repeatability. Although the UI for this may be a bit beyond the scope of this book, creating the Maze in a way that clicking on Refresh won't generate a totally different Maze is really a good thing and will avoid frustration on the part of the user. This will also allow two users to compare scores; we could persist a board number for the Maze and show this. This may be out of scope for our book; however, having a predictable Maze will help immensely during development. If it wasn't for this, you might get lost while working on your world. (Well, probably not, but it makes testing easier.) Including library code from other projects Up to this point, I've shown you how to create components in React VR (or React). JavaScript interestingly has a historical issue with include. With C++, Java, or C#, you can include a file in another file or make a reference to a file in a project. After doing that, everything in those other files, such as functions, classes, and global properties (variables), is then usable from the file that you've issued the include statement in. With a browser, the concept of "including" JavaScript is a little different. With Node.js, we use package.json to indicate what packages we need.
To bring those packages into our code, we will use the following syntax in your .js files:

var MersenneTwister = require('mersenne-twister');

Then, instead of using Math.random(), we will create a new random number generator and pass a seed, as follows:

var rng = new MersenneTwister(this.props.Seed);

From this point on, you just call rng.random() instead of Math.random(). We can just use npm install <package> and the require statement for properly formatted packages. Much of this can be done for you by executing the npm command:

npm install mersenne-twister --save

Remember, the --save flag updates our manifest (package.json) in the project. While we are at it, we can install another package we'll need later:

npm install react-vr-gaze-button --save

Now that we have a good random number generator, let's use it to complicate our world. The Maze render() How do we build a Maze? I wanted to develop some code that dynamically generates the Maze; anyone could model it in a package, but a VR world should be living. Having code that can dynamically build a Maze of any size (to a point) will allow repeat playing of your world. There are a number of JavaScript packages out there for printing mazes. I took one that seemed to be everywhere, in the public domain, on GitHub and modified it for HTML. This app consists of two parts: Maze.html and makeMaze.js. Neither is React, but both are JavaScript. It works fairly well, although the numbers don't really represent exactly how wide it is. First, I made sure that only one x was displaying, both vertically and horizontally. This will not print well (lines are usually taller than wide), but we are building a virtually real Maze, not a paper Maze. The Maze that we generate with the files at Maze.html (localhost:8081/vr/maze.html) and the JavaScript file, makeMaze.js, will now look like this: x1xxxxxxx x x x xxx x x x x x x x x xxxxx x x x x x x x x x x x x 2 xxxxxxxxx It is a little hard to read, but you can count the squares vs. xs. Don't worry, it's going to look a lot fancier. Now that we have the HTML version of a Maze working, we'll start building the hedges. This is a slightly larger piece of code than I expected, so I broke it into pieces and loaded the Maze object onto GitHub rather than pasting the entire code here, as it's long. You can find a link for the source at: http://bit.ly/VR_Chap11 Adding the floors and type checking One of the things that look odd with a 360 Pano background, as we've talked about before, is that you can seem to "float" against the ground. One fix, other than fixing the original image, is to simply add a floor. This is what we did with the Space Gallery, and it looks pretty good as we were assuming we were floating in space anyway. For this version, let's import a ground square. We could use a large square that would encompass the entire Maze; we'd then have to resize it if the size of the Maze changes. I decided to use a smaller cube and alter it so that it's "underneath" every cell of the Maze. This would allow us some leeway in the future to rotate the squares for worn paths, water traps, or whatever. To make the floor, we will use a simple cube object that I altered slightly and is UV mapped. I used Blender for this. We also import a Hedge model, and a Gem, which will represent where we can teleport to.
Inside 'Maze.js' we added the following code:

import Hedge from './Hedge.js';
import Floor from './Hedge.js';
import Gem from './Gem.js';

Then, inside Maze.js, we could instantiate our floor with the code:

<Floor X={-2} Y={-4}/>

Notice that we don't use 'vr/components/Hedge.js' when we do the import; we're inside Maze.js. However, in index.vr.js, to include the Maze, we do need:

import Maze from './vr/components/Maze.js';

It's slightly more complicated though. In our code, the Maze builds the data structures when props have changed; when moving, if the maze needs rendering again, it simply loops through the data structure and builds a collection (mazeHedges) with all of the floors, teleport targets, and hedges in it. Given this, to create the floors, the line in Maze.js is actually:

mazeHedges.push(<Floor {...cellLoc} />);

Here is where I ran into two big problems, and I'll show you what happened so that you can avoid these issues. Initially, I was bashing my head against the wall trying to figure out why my floors looked like hedges. This one is pretty easy—we imported Floor from the Hedge.js file. The floors will look like hedges (did you notice this in my preceding code? If so, I did this on purpose as a learning experience. Honest). This is an easy fix. Make sure that you code import Floor from './floor.js'; and note that Floor is not type-checked. (It is, after all, JavaScript.) I thought this was odd, as the hedge.js file exports a Hedge object, not a Floor object, but be aware that you can rename objects as you import them. The second problem I had was more of a simple goof that is easy to make if you aren't really thinking in React. You may run into this. JavaScript is a lovely language, but sometimes I miss a strongly typed language. Here is what I did:

<Maze SizeX='4' SizeZ='4' CellSpacing='2.1' Seed='7' />

Inside the maze.js file, I had code like this:

for (var j = 0; j < this.props.SizeX + 2; j++) {

After some debugging, I found out that the value of j was going from 0 to 42. Why did it get 42 instead of 6? The reason was simple. We need to fully understand JavaScript to program complex apps. The mistake was in initializing SizeX to be '4'; this makes it a string variable. When calculating j's upper bound from 0 (an integer), React/JavaScript takes 2, adds it to the string '4', and gets the string '42', which is then converted to a number in the comparison. When this happens, very weird things occur; a short sketch of this pitfall appears at the end of this section. When we were building the Space Gallery, we could easily use the '5.1' values for the input to the box:

<Pedestal MyX='0.0' MyZ='-5.1'/>

Then, later, use the transform statement below inside the class:

transform: [ { translate: [ this.props.MyX, -1.7, this.props.MyZ] } ]

React/JavaScript will put the string values into this.props.MyX, then realize it needs a number, and quietly do the conversion. However, when you get to more complicated objects, such as our Maze generation, you won't get away with this. Remember that your code isn't "really" JavaScript. It's processed. At the heart, this processing is fairly simple, but the implications can be a killer. Pay attention to what you code. With a loosely typed language such as JavaScript, with React on top, any mistakes you make will be quietly converted to something you didn't intend. You are the programmer. Program correctly. So, back to the Maze. The Hedge and Floor are straightforward copies of the initial Gem code.
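Here is that string-versus-number pitfall as a minimal hedged sketch; the variable names are invented, and the parseInt fix shown at the end is just one common option:

// SizeX='4' passes a string prop; SizeX={4} would pass a number
var SizeX = '4';
for (var j = 0; j < SizeX + 2; j++) {
  // '4' + 2 === '42', so the comparison is against 42 and j runs 0..41
}

// one fix: convert explicitly before doing arithmetic with the prop
var size = parseInt(SizeX, 10);
for (var k = 0; k < size + 2; k++) {
  // now the loop runs 0..5 when SizeX is '4'
}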
Let's take a look at our starting Gem, although note it gets a lot more complicated later (and in your source files):

import React, { Component } from 'react';
import { asset, Box, Model, Text, View } from 'react-vr';

export default class Gem extends Component {
  constructor() {
    super();
    this.state = {
      Height: -3
    };
  }
  render() {
    return (
      <Model
        source={{
          gltf2: asset('TeleportGem.gltf'),
        }}
        style={{
          transform: [{ translate: [this.props.X, this.state.Height, this.props.Z] }]
        }}
      />
    );
  }
}

The Hedge and Floor are essentially the same thing. (We could have made a prop be the file loaded, but we want a different behavior for the Gem, so we will edit this file extensively.) To run this sample, first, we should have created a directory, as you have before, called WalkInAMaze. Once you do this, download the files from the Git source for this part of the article (http://bit.ly/VR_Chap11). Once you've created the app, copied the files, and fired it up (go to the WalkInAMaze directory and type npm start), you should see something like this once you look around - except, there is a bug. This is what the maze should look like (if you use the file 'MazeHedges2DoubleSided.gltf' in Hedge.js, in the <Model> statement): Now, how did we get those neat-looking hedges in the game? (OK, they are pretty low poly, but it is still pushing it.) One of the nice things about the pace of improvement on web standards is their new features. Instead of just the .obj file format, React VR now has the capability to load glTF files. Using the glTF file format for models glTF files are a new file format that works pretty naturally with WebGL. There are exporters for many different CAD packages. The reason I like glTF files is that getting a proper export is fairly straightforward. Lightwave OBJ files are an industry standard, but in the case of React, not all of the options are imported. One major one is transparency. The OBJ file format allows it, but as of the time of writing this book, it wasn't supported. Many other graphics shaders that modern hardware can handle can't be described with the OBJ file format. This is why glTF files are the next best alternative for WebVR. It is a modern and evolving format, and work is being done to enhance the capabilities and make a fairly good match between what WebGL can display and what glTF can export. This article is, however, about interacting with the world, so I'll give only a brief mention of how to export glTF files, and provide the objects, especially the Hedge, as glTF models. The nice thing with glTF from the modeling side is that if you use their material specifications, for example, for Blender, then you don't have to worry that the export won't be quite right. Today's Physically Based Rendering (PBR) tends to use the metallic/roughness model, and these import better than trying to figure out how to convert PBR materials into the OBJ file's specular lighting model. Here is the metallic-looking Gem that I'm using as the gaze point: Using the glTF Metallic Roughness model, we can assign the texture maps that programs, such as Substance Designer, calculate and import easily. The resulting figures look metallic where they are supposed to be metallic and dull where the paint still holds on. I didn't use Ambient Occlusion here, as this is a very convex model; something with more surface depressions would look fantastic with Ambient Occlusion. It would also look great with architectural models, for example, furniture.
To convert your models, there is user documentation at http://bit.ly/glTFExporting. You will need to download and install the Blender glTF exporter. Or, you can just download the files I have already converted. If you do the export, in brief, you do the following steps:

1. Download the files from http://bit.ly/gLTFFiles. You will need the gltf2_Principled.blend file, assuming that you are on a newer version of Blender.
2. In Blender, open your file, then link to the new materials. Go to File->Link, then choose the gltf2_Principled.blend file. Once you do that, drill into "NodeTree" and choose either glTF Metallic Roughness (for metal), or glTF specular glossiness for other materials.
3. Choose the object you are going to export; make sure that you choose the Cycles renderer.
4. Open the Node Editor in a window.
5. Scroll down to the bottom of the Node Editor window, and make sure that the box Use Nodes is checked.
6. Add the node via the nodal menu, Add->Group->glTF Specular Glossiness or Metallic Roughness.
7. Once the node is added, go to Add->Texture->Image texture. Add as many image textures as you have image maps, then wire them up. You should end up with something similar to this diagram.

To export the models, I recommend that you disable camera export and combine the buffers, unless you think you will be exporting several models that share geometry or materials. The Export options I used are as follows: Now, to include the exported glTF object, use the <Model> component as you would with an OBJ file, except you have no MTL file. The materials are all described inside the .glTF file. To include the exported glTF object, you just put the filename as a gltf2 prop in the <Model> component:

<Model source={{ gltf2: asset('TeleportGem2.gltf'), }} ...

To find out more about these options and processes, you can go to the glTF export web site. This site also includes tutorials on major CAD packages and the all-important glTF shaders (for example, the Blender model I showed earlier). I have loaded several .OBJ files and .glTF files so you can experiment with different combinations of low poly and transparency. When glTF support was added in React VR version 2.0.0, I was very excited, as transparency maps are very important for a lot of VR models, especially vegetation; just like our hedges. However, it turns out there is a bug in WebGL or three.js that does not render the transparency properly. As a result, I have gone with a low polygon version in the files on the GitHub site; the pictures, above, were with the file MazeHedges2DoubleSided.gltf in the Hedges.js file (in vr/components). If you get 404 errors, check the paths in the glTF file. It depends on which exporter you use—if you are working with Blender, the gltf2 exporter from the Khronos group calculates the path correctly, but the one from Kupoman has options, and you could export the wrong paths. We discussed important mechanics of props, state, and events. We also discussed how to create a maze using pseudo-random number generators to make sure that our props and state didn't change chaotically. To know more about how to create, move around in, and make worlds react to us in a Virtual Reality world, including basic teleport mechanics, do check out this book Getting Started with React VR.  Read More: Google Daydream powered Lenovo Mirage solo hits the market Google open sources Seurat to bring high precision graphics to Mobile VR Oculus Go, the first stand alone VR headset arrives!

Build an ARCore app with Unity from scratch

Sugandha Lahoti
21 May 2018
11 min read
In this tutorial, we will learn to install, build, and deploy Unity ARCore apps for Android. Unity is a leading cross-platform game engine that is exceptionally easy to use for building game and graphic applications quickly. Unity has developed something of a bad reputation in recent years due to its overuse in poor-quality games. It isn't because Unity can't produce high-quality games, it most certainly can. However, the ability to create games quickly often gets abused by developers seeking to release cheap games for profit. This article is an excerpt from the book, Learn ARCore - Fundamentals of Google ARCore, written by Micheal Lanham. The following is a summary of the topics we will cover in this article: Installing Unity and ARCore Building and deploying to Android Remote debugging Exploring the code Installing Unity and ARCore Installing the Unity editor is relatively straightforward. However, the version of Unity we will be using may still be in beta. Therefore, it is important that you pay special attention to the following instructions when installing Unity: Navigate a web browser to https://unity3d.com/unity/beta. At the time of writing, we will use the most recent beta version of Unity since ARCore is also still in beta preview. Be sure to note the version you are downloading and installing. This will help in the event you have issues working with ARCore. Click on the Download installer button. This will download UnityDownloadAssistant. Launch UnityDownloadAssistant. Click on Next and then agree to the Terms of Service. Click on Next again. Select the components, as shown: Install Unity in a folder that identifies the version, as follows: Click on Next to download and install Unity. This can take a while, so get up, move around, and grab a beverage. Click on the Finish button and ensure that Unity is set to launch automatically. Let Unity launch and leave the window open. We will get back to it shortly. Once Unity is installed, we want to download the ARCore SDK for Unity. This will be easy now that we have Git installed. Follow the given instructions to install the SDK: Open a shell or Command Prompt. Navigate to your Android folder. On Windows, use this:

cd C:\Android

Type and execute the following:

git clone https://github.com/google-ar/arcore-unity-sdk.git

After the git command completes, you will see a new folder called arcore-unity-sdk. If this is your first time using Unity, you will need to go online to https://unity3d.com/ and create a Unity user account. The Unity editor will require that you log in on first use and from time to time. Now that we have Unity and ARCore installed, it's time to open the sample project by implementing the following steps: If you closed the Unity window, launch the Unity editor. The path on Windows will be C:\Unity 2017.3.0b8\Editor\Unity.exe. Feel free to create a shortcut with the version number in order to make it easier to launch the specific Unity version later. Switch to the Unity project window and click on the Open button. Select the Android/arcore-unity-sdk folder. This is the folder we used the git command to install the SDK to earlier, as shown in the following dialog: Click on the Select Folder button. This will launch the editor and load the project. Open the Assets/GoogleARCore/HelloARExample/Scenes folder in the Project window, as shown in the following excerpt: Double-click on the HelloAR scene, as shown in the Project window and in the preceding screenshot. This will load our AR scene into Unity.
At any point, if you see red console or error messages in the bottom status bar, this likely means you have a version conflict. You will likely need to install a different version of Unity. Now that we have Unity and ARCore installed, we will build the project and deploy the app to an Android device in the next section. Building and deploying to Android With most Unity development, we could just run our scene in the editor for testing. Unfortunately, when developing ARCore applications, we need to deploy the app to a device for testing. Fortunately, the project we are opening should already be configured for the most part. So, let's get started by following the steps in the next exercise: Open up the Unity editor to the sample ARCore project and open the HelloAR scene. If you left Unity open from the last exercise, just ignore this step. Connect your device via USB. From the menu, select File | Build Settings. Confirm that the settings match the following dialog: Confirm that the HelloAR scene is added to the build. If the scene is missing, click on the Add Open Scenes button to add it. Click on Build and Run. Be patient, first-time builds can take a while. After the app gets pushed to the device, feel free to test it, as you did with the Android version. Great! Now we have a Unity version of the sample ARCore project running. In the next section, we will look at remotely debugging our app. Remote debugging Having to connect a USB all the time to push an app is inconvenient. Not to mention that, if we wanted to do any debugging, we would need to maintain a physical USB connection to our development machine at all times. Fortunately, there is a way to connect our Android device via Wi-Fi to our development machine. Use the following steps to establish a Wi-Fi connection: Ensure that a device is connected via USB. Open Command Prompt or shell. On Windows, we will add C:\Android\sdk\platform-tools to the path just for the prompt we are working on. It is recommended that you add this path to your environment variables. Google it if you are unsure of what this means. Enter the following commands:

//WINDOWS ONLY
path C:\Android\sdk\platform-tools

//FOR ALL
adb devices
adb tcpip 5555

If it worked, you will see restarting in TCP mode port: 5555. If you encounter an error, disconnect and reconnect the device. Disconnect your device. Locate the IP address of your device by doing as follows: Open your phone and go to Settings and then About phone. Tap on Status. Note down the IP address. Go back to your shell or Command Prompt and enter the following:

adb connect [IP Address]

Ensure that you use the IP Address you wrote down from your device. You should see connected to [IP Address]:5555. If you encounter a problem, just run through the steps again. Testing the connection Now that we have a remote connection to our device, we should test it to ensure that it works. Let's test our connection by doing the following: Open up Unity to the sample AR project. Expand the Canvas object in the Hierarchy window until you see the SearchingText object and select it, just as shown in the following excerpt: Hierarchy window showing the selected SearchingText object Direct your attention to the Inspector window, on the right-hand side by default. Scroll down in the window until you see the text "Searching for surfaces…". Modify the text to read "Searching for ARCore surfaces…", just as we did in the last chapter for Android. From the menu, select File | Build and Run. Open your device and test your app.
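As a practical aside (not from the excerpt), a few standard adb commands are handy for managing this Wi-Fi connection:

adb devices       # confirm the device now shows up as [IP Address]:5555
adb disconnect    # drop the TCP/IP connection when you are finished
adb usb           # switch adbd on the device back to USB mode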
Remotely debugging a running app Now, building and pushing an app to your device this way will take longer, but it is far more convenient. Next, let's look at how we can debug a running app remotely by performing the following steps: Go back to your shell or Command Prompt. Enter the following command:

adb logcat

You will see a stream of logs covering the screen, which is not something very useful. Enter Ctrl + C (command + C on Mac) to kill the process. Enter the following command:

//ON WINDOWS
C:\Android\sdk\tools\monitor.bat

//ON LINUX/MAC
cd android-sdk/tools/
monitor

This will open Android Device Monitor. You should see your device on the list to the left. Ensure that you select it. You will see the log output start streaming in the LogCat window. Drag the LogCat window so that it is a tab in the main window, as illustrated: Android Device Monitor showing the LogCat window Leave the Android Device Monitor window open and running. We will come back to it later. Now we can build, deploy, and debug remotely. This will give us plenty of flexibility later when we want to become more mobile. Of course, the remote connection we put in place with adb will also work with Android Studio. Yet, we still are not actually tracking any log output. We will output some log messages in the next section. Exploring the code Unlike Android, we were able to easily modify our Unity app right in the editor without writing code. In fact, given the right Unity extensions, you can make a working game in Unity without any code. However, for us, we want to get into the nitty-gritty details of ARCore, and that will require writing some code. Jump back to the Unity editor, and let's look at how we can modify some code by implementing the following exercise: From the Hierarchy window, select the ExampleController object. This will pull up the object in the Inspector window. Select the Gear icon beside Hello AR Controller (Script) and from the context menu, select Edit Script, as in the following excerpt: This will open your script editor and load the script, by default, MonoDevelop. Unity supports a number of Integrated Development Environments (IDEs) for writing C# scripts. Some popular options are Visual Studio 2015-2017 (Windows), VS Code (All), JetBrains Rider (Mac), and even Notepad++ (All). Do yourself a favor and try one of the options listed for your OS. Scroll down in the script until you see the following block of code:

public void Update () {
  _QuitOnConnectionErrors();

After the _QuitOnConnectionErrors(); line of code, add the following code:

Debug.Log("Unity Update Method");

Save the file and then go back to Unity. Unity will automatically recompile the file. If you made any errors, you will see red error messages in the status bar or console. From the menu, select File | Build and Run. As long as your device is still connected via TCP/IP, this will work. If your connection broke, just go back to the previous section and reset it. Run the app on the device. Direct your attention to Android Device Monitor and see whether you can spot those log messages. Unity Update method The Unity Update method is a special method that runs before/during a frame update or render. For your typical game running at 60 frames per second, this means that the Update method will be called 60 times per second as well, so you should be seeing lots of messages tagged as Unity. You can filter these messages by doing the following: Jump to the Android Device Monitor window.
Click on the green plus button in the Saved Filters panel, as shown in the following excerpt: Adding a new tag filter Create a new filter by entering a Filter Name (use Unity) and a Log Tag (use Unity), as shown in the preceding screenshot. Click on OK to add the filter. Select the new Unity filter. You will now see a list of filtered messages specific to the Unity platform when the app is running on the device. If you are not seeing any messages, check your connection and try to rebuild. Ensure that you saved your edited code file in MonoDevelop as well. Good job. We now have a working Unity setup with remote build and debug support. In this post, we installed Unity and the ARCore SDK for Unity. We then took a slight diversion by setting up a remote build and debug connection to our device using TCP/IP over Wi-Fi. Next, we tested out our ability to modify the C# script in Unity by adding some debug log output. Finally, we tested our code changes using the Android Device Monitor tool to filter and track log messages from the Unity app deployed to the device. To learn how to set up web development with JavaScript in ARCore and look through the various sample ARCore templates, check out the book Learn ARCore - Fundamentals of Google ARCore. Getting started with building an ARCore application for Android Unity plugins for augmented reality application development Types of Augmented Reality targets

Augmented Reality

Packt
22 Nov 2013
6 min read
(For more resources related to this topic, see here.) A quick overview of AR concepts As AR has become increasingly popular in the media over the last few years, unfortunately, several distorted notions of Augmented Reality have evolved. Anything that is somehow related to the real world and involves some computing, such as standing in front of a shop and watching 3D models wear the latest fashions, has become AR. Augmented Reality emerged from research labs a few decades ago, and different definitions of AR have been produced. As more and more research fields (for example, computer vision, computer graphics, human-computer interaction, medicine, humanities, and art) have investigated AR as a technology, application, or concept, multiple overlapping definitions now exist for AR. Rather than providing you with an exhaustive list of definitions, we will present some major concepts present in any AR application. Sensory augmentation The term Augmented Reality itself contains the notion of reality. Augmenting generally refers to the aspect of influencing one of your human sensory systems, such as vision or hearing, with additional information. This information is generally defined as digital or virtual and will be produced by a computer. The technology currently uses displays to overlay and merge the physical information with the digital information. To augment your hearing, modified headphones or earphones equipped with microphones are able to mix sound from your surroundings in real time with sound generated by your computer. Displays The TV screen at home is the ideal device to perceive virtual content, streamed from broadcasts or played from your DVD. Unfortunately, most common TV screens are not able to capture the real world and augment it. An Augmented Reality display needs to simultaneously show the real and virtual worlds. One of the first display technologies for AR was produced by Ivan Sutherland in 1964 (named "The Sword of Damocles"). The system was rigidly mounted on the ceiling and used some CRT screens and a transparent display to be able to create the sensation of visually merging the real and virtual. Since then, we have seen different trends in AR display, going from static to wearable and handheld displays. One of the major trends is the usage of optical see-through (OST) technology. The idea is to still see the real world through a semitransparent screen and project some virtual content on the screen. The merging of the real and virtual worlds does not happen on the computer screen, but directly on the retina of your eye, as depicted in the following figure: The other major trend in AR display is what we call video see-through (VST) technology. You can imagine perceiving the world not directly, but through a video on a monitor. The video image is mixed with some virtual content (as you will see in a movie) and sent back to some standard display, such as your desktop screen, your mobile phone, or the upcoming generation of head-mounted displays, as shown in the following figure: In this book, we will work on Android-driven mobile phones and, therefore, discuss only VST systems; the video camera used will be the one on the back of your phone. Registration in 3D With a display (OST or VST) in your hands, you are already able to superimpose things onto your real world, as you will see in TV advertisements with text banners at the bottom of the screen. However, any virtual content (such as text or images) will remain fixed in its position on the screen.
Interaction with the environment

Building a rich AR application needs interaction with the environment; otherwise, you end up with pretty 3D graphics that can turn boring quite fast. AR interaction refers to selecting and manipulating digital and physical objects and navigating in the augmented scene. Rich AR applications allow you to use objects on your table to move some virtual characters, use your hands to select floating virtual objects while walking on the street, or speak to a virtual agent appearing on your watch to arrange a meeting later in the day. We will look at how some of the standard mobile interaction techniques can also be applied to AR. We will also dig into specific techniques involving the manipulation of the real world.

Summary

Thus we have learned about the basic AR concepts through this article.

Resources for Article:

Further resources on this subject:
Marker-based Augmented Reality on iPhone or iPad [Article]
Creating Dynamic UI with Android Fragments [Article]
Introducing an Android platform [Article]

article-image-play-functions

Play With Functions

Packt
21 Feb 2018
6 min read
This article by Igor Wojda and Marcin Moskala, authors of the book Android Development with Kotlin, introduces functions in Kotlin, together with different ways of calling them. (For more resources related to this topic, see here.)

Single-expression functions

In typical programming, many functions contain only one expression. Here is an example of this kind of function:

fun square(x: Int): Int {
    return x * x
}

Here is another one, which can often be found in Android projects. It is a pattern used in an Activity to define methods that just get text from some view, or provide some other data from the view, so that the presenter can access it:

fun getEmail(): String {
    return emailView.text.toString()
}

Both functions are defined to return the result of a single expression. In the first example, it is the result of the x * x multiplication, and in the second one, it is the result of the expression emailView.text.toString(). Functions of this kind are used all around Android projects. Here are some common use cases:

Extracting small operations
Using polymorphism to provide values specific to a class
Functions that only create some object
Functions that pass data between architecture layers (as in the preceding example, where the Activity passes data from the view to the presenter)
Functional programming style functions based on recurrence

Such functions are used so often that Kotlin has a special notation for them. When a function returns a single expression, the curly braces and the block body of the function can be omitted; we specify the expression directly, after an equality character. Functions defined this way are called single-expression functions. Let's update our square function and define it as a single-expression function:

fun square(x: Int): Int = x * x

As we can see, a single-expression function has an expression body instead of a block body. This notation is shorter, but the whole body needs to be just a single expression. In single-expression functions, declaring the return type is optional, because it can be inferred by the compiler from the type of the expression. This is why we can simplify the square function further and define it this way:

fun square(x: Int) = x * x

There are many places inside an Android application where we can utilize single-expression functions. Let's consider a RecyclerView adapter that provides a layout ID and creates a ViewHolder:

class AddressAdapter : ItemAdapter<AddressAdapter.ViewHolder>() {
    override fun getLayoutId() = R.layout.choose_address_view
    override fun onCreateViewHolder(itemView: View) = ViewHolder(itemView)

    // Rest of methods
}

In the preceding example, we achieve high readability thanks to single-expression functions. Single-expression functions are also very popular in the functional world. The single-expression function notation also pairs well with the when structure. Here is an example of their combination, used to get specific data from an object according to a key (a use case from a big Kotlin project):

fun valueFromBooking(key: String, booking: Booking?) = when(key) {
    "patient.nin" -> booking?.patient?.nin
    "patient.email" -> booking?.patient?.email
    "patient.phone" -> booking?.patient?.phone
    "comment" -> booking?.comment
    else -> null
}

We don't need to declare a return type, because it is inferred from the when expression.
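As a quick illustration, a hypothetical call site could look like this (the booking value comes from the surrounding code, and the function returns a nullable String that needs to be handled):

val email = valueFromBooking("patient.email", booking)
val label = email ?: "no email provided" // Fall back when the key resolves to null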
Another common Android example is that we can combine a when expression with the Activity method onOptionsItemSelected, which handles top bar menu clicks:

override fun onOptionsItemSelected(item: MenuItem): Boolean = when {
    item.itemId == android.R.id.home -> {
        onBackPressed()
        true
    }
    else -> super.onOptionsItemSelected(item)
}

As we can see, single-expression functions can make our code more concise and improve readability. Single-expression functions are commonly used in Kotlin Android projects, and they are really popular in functional programming. As an example, let's suppose that we need to filter all the odd values from the following list:

val list = listOf(1, 2, 3, 4, 5)

We will use the following helper function, which returns true if its argument is odd and false otherwise:

fun isOdd(i: Int) = i % 2 == 1

In the imperative programming style, we specify the steps of processing, which are tied to the execution process (iterate through the list, check whether each value is odd, and add it to a new list if it is). Here is an implementation of this functionality that is typical of the imperative style:

var oddList = emptyList<Int>()
for (i in list) {
    if (isOdd(i)) {
        oddList += i
    }
}

In the declarative programming style, the way of thinking about code is different: we think about what the required result is and simply use functions that will give us this result. The Kotlin standard library provides a lot of functions supporting the declarative programming style. Here is how we could implement the same functionality using one of them, called filter:

val oddList = list.filter(::isOdd)

filter is a function that keeps only the elements that are true according to a predicate. Here the function isOdd is used as the predicate.

Different ways of calling a function

Sometimes we need to call a function and provide only selected arguments. In Java, we could create multiple overloads of the same method, but this solution has some limitations. The first problem is that the number of possible method permutations grows very quickly (2^n), making them very difficult to maintain. The second problem is that overloads must be distinguishable from each other so that the compiler knows which overload to call, so when a method defines several parameters of the same type, we can't define all possible overloads. That's why in Java we often need to pass multiple null values to a method:

// Java
printValue("abc", null, null, "!");

Multiple null parameters add boilerplate and greatly decrease method readability. In Kotlin there is no such problem, because Kotlin has features called default arguments and named argument syntax.

Default argument values

Default arguments are mostly known from C++, which is one of the oldest languages supporting them. A default argument provides a value for a parameter in case the argument is not provided during the method call. Each function parameter can have a default value, which may be any value matching the specified type, including null.
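The declaration of printValue itself is not included in this extract. A minimal definition consistent with the calls that follow, using the parameter names inBracket, prefix, and suffix mentioned below, could look like this:

fun printValue(value: String,
               inBracket: Boolean = true,
               prefix: String = "",
               suffix: String = "") {
    // Wrap the value in brackets when requested, then surround it with the prefix and suffix
    if (inBracket) {
        println("$prefix($value)$suffix")
    } else {
        println("$prefix$value$suffix")
    }
}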
This way, we can simply define functions that can be called in multiple ways. We can use such a function the same way as a normal function (a function without default argument values), by providing values for all the parameters:

printValue("str", true, "", "") // Prints: (str)

Thanks to default argument values, we can call a function by providing arguments only for the parameters without default values:

printValue("str") // Prints: (str)

We can also provide all parameters without default values and only some of those that have a default value:

printValue("str", false) // Prints: str

Named argument syntax

Sometimes we want to pass a value only for the last argument. Let's suppose that we want to define a value for suffix, but not for prefix and inBracket (which are defined before suffix). Normally, we would have to provide values for all the previous parameters, including the default parameter values:

printValue("str", true, "", "!") // Prints: (str)!

By using the named argument syntax, we can pass a specific argument using the argument name:

printValue("str", suffix = "!") // Prints: (str)!

We can also use the named argument syntax together with the classic positional call. The only restriction is that once we start using the named syntax, we cannot use the classic positional one for the arguments that follow:

printValue("str", true, "")
printValue("str", true, prefix = "")
printValue("str", inBracket = true, prefix = "")

Summary

In this article, we learned about single-expression functions as a way of defining functions in application development. We also briefly explained default argument values and the named argument syntax as different ways of calling a function.

Resources for Article:

Further resources on this subject:
Getting started with Android Development [article]
Android Game Development with Unity3D [article]
Kotlin Basics [article]