How-To Tutorials

Implementing K-Means Clustering in Python

Aaron Lazar
09 Nov 2017
9 min read
This article is an adaptation of content from the book Data Science Algorithms in a Week, by David Natingga. I've modified it a bit and turned it into a sequence from a thriller, starring Agents Hobbs and O'Connor from the FBI. The idea is to show you, practically, how to implement k-means clustering in your friendly neighborhood language, Python.

Agent Hobbs: Agent… Agent O'Connor… O'Connor!

Agent O'Connor: Blimey! Uh.. Ohh.. Sorry, sir!

Hobbs: 'Tat's abou' the fifth time oive' caught you sleeping on duty, young man!

O'Connor: Umm. Apologies, sir. I just arrived here, and didn't have much to…

Hobbs: Cut the bull, agent! There's an important case oime workin' on and oi' need information on this righ' awai'! Here's the list of missing persons kidnapped so far by the suspects. The suspects now taunt us with open clues abou' their next target! Based on the information, we've narrowed their target list down to Miss Gibbons and Mr. Hudson.

Hobbs throws a file across O'Connor's desk and says as he storms out the door: You 'ave an hour to find out who needs the special security, so better get working.

O'Connor: Yes, sir! Bloody hell, that was close!

Here's the information O'Connor has: he needs to find the probability that the 11th person, with a height of 172 cm, a weight of 60 kg, and long hair, is a man. O'Connor gets to work. To simplify matters, he removes the Hair length column as well as the Gender column, since he would like to cluster the people in the table based only on their height and weight. To find out whether the 11th person in the table is more likely to be a man or a woman, he uses clustering.

Analysis

O'Connor could apply scaling to the initial data, but to keep things simple he uses the unscaled data in the algorithm. He clusters the data into two clusters, since there are two possible genders: male and female. He will then classify the person with height 172 cm and weight 60 kg as more likely a man if and only if there are more men in that person's cluster. Clustering is a very efficient technique, so classifying this way is very fast, especially when there are a large number of features to classify. He goes on to apply the k-means clustering algorithm to the data he has.

First, he picks the initial centroids. He takes the first centroid to be, for example, the person with height 180 cm and weight 75 kg, denoted as the vector (180,75). The point that is furthest away from (180,75) is (155,46), so that becomes the second centroid.

The points that are closer (by Euclidean distance) to the first centroid (180,75) are (180,75), (174,71), (184,83), (168,63), (178,70), (170,59), (172,60); these points form the first cluster. The points that are closer to the second centroid (155,46) are (155,46), (164,53), (162,52), (166,55); these form the second cluster. He displays the current state of the two clusters in the image below (clustering of people by their height and weight).

He then recomputes the centroids of the clusters. The blue cluster, with the points (180,75), (174,71), (184,83), (168,63), (178,70), (170,59), (172,60), has the centroid ((180+174+184+168+178+170+172)/7, (75+71+83+63+70+59+60)/7) ≈ (175.14, 68.71). The red cluster, with the points (155,46), (164,53), (162,52), (166,55), has the centroid ((155+164+162+166)/4, (46+53+52+55)/4) = (161.75, 51.5). Reclassifying the points using the new centroids, the classes of the points do not change.
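To double-check the centroid arithmetic before moving on, here is a small Python snippet (not part of the book's program) that recomputes the two centroids from the cluster points listed above:

# Quick sanity check of the centroid arithmetic (illustrative only).
blue = [(180, 75), (174, 71), (184, 83), (168, 63), (178, 70), (170, 59), (172, 60)]
red = [(155, 46), (164, 53), (162, 52), (166, 55)]

def centroid(points):
    # Average the x coordinates and the y coordinates separately.
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (sum(xs) / float(len(xs)), sum(ys) / float(len(ys)))

print(centroid(blue))  # approximately (175.14, 68.71)
print(centroid(red))   # (161.75, 51.5)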
The blue cluster keeps the points (180,75), (174,71), (184,83), (168,63), (178,70), (170,59), (172,60), and the red cluster keeps the points (155,46), (164,53), (162,52), (166,55). Therefore the clustering algorithm terminates with the clusters displayed in the following image (clustering of people by their height and weight).

Now he determines whether the instance (172,60) is a male or a female. The instance (172,60) is in the blue cluster, so it is similar to the other members of the blue cluster. Are the remaining members of the blue cluster more likely males or females? 5 out of 6 are males and only 1 is a female. Since the majority of the members of the blue cluster are males and the person (172,60) is in the blue cluster as well, he classifies the person with height 172 cm and weight 60 kg as a male.

Implementing K-Means clustering in Python

O'Connor implements the k-means clustering algorithm in Python. It takes as input a CSV file with one data item per line. A data item is converted to a point. The algorithm classifies these points into the specified number of clusters and, at the end, visualizes the clusters on a graph using the matplotlib library:

# source_code/5/k-means_clustering.py
import math
import imp
import sys
import matplotlib.pyplot as plt
import matplotlib
sys.path.append('../common')
import common  # noqa

matplotlib.style.use('ggplot')

# Returns k initial centroids for the given points.
def choose_init_centroids(points, k):
    centroids = []
    centroids.append(points[0])
    while len(centroids) < k:
        # Find the centroid with the greatest possible distance
        # to the closest already chosen centroid.
        candidate = points[0]
        candidate_dist = min_dist(points[0], centroids)
        for point in points:
            dist = min_dist(point, centroids)
            if dist > candidate_dist:
                candidate = point
                candidate_dist = dist
        centroids.append(candidate)
    return centroids

# Returns the distance of a point from the closest point in points.
def min_dist(point, points):
    min_dist = euclidean_dist(point, points[0])
    for point2 in points:
        dist = euclidean_dist(point, point2)
        if dist < min_dist:
            min_dist = dist
    return min_dist

# Returns the Euclidean distance of two 2-dimensional points.
def euclidean_dist((x1, y1), (x2, y2)):
    return math.sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2))

# PointGroup is a tuple that contains in the first coordinate a 2d point
# and in the second coordinate the group which the point is classified to.
def choose_centroids(point_groups, k):
    centroid_xs = [0] * k
    centroid_ys = [0] * k
    group_counts = [0] * k
    for ((x, y), group) in point_groups:
        centroid_xs[group] += x
        centroid_ys[group] += y
        group_counts[group] += 1
    centroids = []
    for group in range(0, k):
        centroids.append((
            float(centroid_xs[group]) / group_counts[group],
            float(centroid_ys[group]) / group_counts[group]))
    return centroids

# Returns the number of the centroid which is closest to the point.
# This number of the centroid is the number of the group that
# the point belongs to.
def closest_group(point, centroids):
    selected_group = 0
    selected_dist = euclidean_dist(point, centroids[0])
    for i in range(1, len(centroids)):
        dist = euclidean_dist(point, centroids[i])
        if dist < selected_dist:
            selected_group = i
            selected_dist = dist
    return selected_group

# Reassigns the groups to the points according to which centroid
# a point is closest to.
def assign_groups(point_groups, centroids):
    new_point_groups = []
    for (point, group) in point_groups:
        new_point_groups.append(
            (point, closest_group(point, centroids)))
    return new_point_groups

# Returns a list of point groups given a list of points.
def points_to_point_groups(points):
    point_groups = []
    for point in points:
        point_groups.append((point, 0))
    return point_groups

# Clusters points into the k groups, adding every stage
# of the algorithm to the history, which is returned.
def cluster_with_history(points, k):
    history = []
    centroids = choose_init_centroids(points, k)
    point_groups = points_to_point_groups(points)
    while True:
        point_groups = assign_groups(point_groups, centroids)
        history.append((point_groups, centroids))
        new_centroids = choose_centroids(point_groups, k)
        done = True
        for i in range(0, len(centroids)):
            if centroids[i] != new_centroids[i]:
                done = False
                break
        if done:
            return history
        centroids = new_centroids

# Program start
csv_file = sys.argv[1]
k = int(sys.argv[2])
everything = False
# The third argument sys.argv[3] represents the number of the step of the
# algorithm, starting from 0, to be shown, or "last" for displaying the last
# step and the number of the steps.
if sys.argv[3] == "last":
    everything = True
else:
    step = int(sys.argv[3])

data = common.csv_file_to_list(csv_file)
points = data_to_points(data)  # Represent every data item by a point.
history = cluster_with_history(points, k)
if everything:
    print "The total number of steps:", len(history)
    print "The history of the algorithm:"
    (point_groups, centroids) = history[len(history) - 1]
    # Print all the history.
    print_cluster_history(history)
    # But display the situation graphically at the last step only.
    draw(point_groups, centroids)
else:
    (point_groups, centroids) = history[step]
    print "Data for the step number", step, ":"
    print point_groups, centroids
    draw(point_groups, centroids)

Input data from gender classification

He saves the data for the classification into a CSV file:

# source_code/5/persons_by_height_and_weight.csv
180,75
174,71
184,83
168,63
178,70
170,59
164,53
155,46
162,52
166,55
172,60

Program output for the classification data

O'Connor runs the program implementing the k-means clustering algorithm on the classification data. The numerical argument 2 means that he would like to cluster the data into 2 clusters:

$ python k-means_clustering.py persons_by_height_weight.csv 2 last
The total number of steps: 2
The history of the algorithm:
Step number 0:
point_groups = [((180.0, 75.0), 0), ((174.0, 71.0), 0), ((184.0, 83.0), 0), ((168.0, 63.0), 0), ((178.0, 70.0), 0), ((170.0, 59.0), 0), ((164.0, 53.0), 1), ((155.0, 46.0), 1), ((162.0, 52.0), 1), ((166.0, 55.0), 1), ((172.0, 60.0), 0)]
centroids = [(180.0, 75.0), (155.0, 46.0)]
Step number 1:
point_groups = [((180.0, 75.0), 0), ((174.0, 71.0), 0), ((184.0, 83.0), 0), ((168.0, 63.0), 0), ((178.0, 70.0), 0), ((170.0, 59.0), 0), ((164.0, 53.0), 1), ((155.0, 46.0), 1), ((162.0, 52.0), 1), ((166.0, 55.0), 1), ((172.0, 60.0), 0)]
centroids = [(175.14285714285714, 68.71428571428571), (161.75, 51.5)]

The program also outputs the graph visible in the second image. The parameter last means that O'Connor would like the program to do the clustering until the last step. If he wants to display only the first step (step 0), he can change last to 0 and run:

$ python k-means_clustering.py persons_by_height_weight.csv 2 0

Upon execution, O'Connor gets the graph of the clusters and their centroids at the initial step, as in the first image.
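If you'd like to cross-check O'Connor's result with an off-the-shelf implementation, this minimal scikit-learn sketch (not part of the book's code, and assuming scikit-learn is installed) should reproduce the same two clusters and centroids:

# Minimal cross-check with scikit-learn's KMeans (illustrative only).
from sklearn.cluster import KMeans

people = [(180, 75), (174, 71), (184, 83), (168, 63), (178, 70), (170, 59),
          (164, 53), (155, 46), (162, 52), (166, 55), (172, 60)]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(people)
print(kmeans.labels_)            # cluster index assigned to each person
print(kmeans.cluster_centers_)   # centroids near (175.14, 68.71) and (161.75, 51.5)

The label numbering may be swapped relative to the book's blue and red clusters, but the split of the eleven people, and hence the 5-out-of-6 male majority behind O'Connor's confidence, is the same.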
He heaves a sigh of relief.

Hobbs returns just then: Oye there O'Connor, not snoozing again now O'are ya?

O'Connor: Not at all, sir. I think we need to provide Mr. Hudson with special protection, because it looks like he's the next target.

Hobbs raises an eyebrow as he adjusts the gun in its holster: Emm, O'are ya sure, agent?

O'Connor replies with a smile: 83.33% confident, sir!

Hobbs: Wha' are we waiting for then, eh? Let's go get 'em!

If you liked reading this mystery, go ahead and buy the book it was inspired by: Data Science Algorithms in a Week, by David Natingga.

Installing a blockchain network using Hyperledger Fabric and Composer [Tutorial]

Savia Lobo
01 Apr 2019
6 min read
This article is an excerpt taken from the book Hands-On IoT Solutions with Blockchain, written by Maximiliano Santos and Enio Moura. In the book, you'll learn how to work with problem statements and how to design your solution architecture so that you can create your own integrated blockchain and IoT solution. In this article, you will learn how to install your own blockchain network using Hyperledger Fabric and Composer.

We can install the blockchain network using Hyperledger Fabric by many means, including local servers, Kubernetes, IBM Cloud, and Docker. To begin with, we'll explore Docker and Kubernetes.

Setting up Docker

Docker can be installed using the information provided at https://www.docker.com/get-started. Hyperledger Composer works with two versions of Docker:

- Docker Compose version 1.8 or higher
- Docker Engine version 17.03 or higher

If you already have Docker installed but you're not sure about the version, you can find out the version by using the following command in the terminal or command prompt:

docker --version

Be careful: many Linux-based operating systems, such as Ubuntu, come with a recent version of Python (Python 3.5.1). In this case, it's important to get Python version 2.7. You can get it here: https://www.python.org/download/releases/2.7/.

Installing Hyperledger Composer

We're now going to set up Hyperledger Composer and gain access to its development tools, which are mainly used to create business networks. We'll also set up Hyperledger Fabric, which can be used to run or deploy business networks locally. These business networks can also be run on Hyperledger Fabric runtimes in some alternative places, for example, on a cloud platform.

Make sure that you've not installed the tools and used them before. If you have, remove them before following this guide.

Components

To successfully install Hyperledger Composer, you'll need these components ready:

- CLI Tools
- Playground
- Hyperledger Fabric
- An IDE

Once these are set up, you can begin with the steps given here.

Step 1 – Setting up CLI Tools

CLI Tools, composer-cli, is a library with the most important operations, such as administrative, operational, and developmental tasks. We'll also install the following tools during this step:

- Yeoman: a frontend tool for generating applications
- Library generator: for generating application assets
- REST server: a utility for running a local REST server

Let's start our setup of CLI Tools. Install CLI Tools:

npm install -g composer-cli@0.20

Install the library generator:

npm install -g generator-hyperledger-composer@0.20

Install the REST server:

npm install -g composer-rest-server@0.20

This will allow for integration with a local REST server to expose your business networks as RESTful APIs. Install Yeoman:

npm install -g yo

Don't use the su or sudo commands with npm; this ensures that the current user has all the permissions necessary to run the environment by itself.

Step 2 – Setting up Playground

Playground gives you a UI on your local machine, accessed through your browser, that lets you display your business networks, browse apps to test edits, and test your business networks. Use the following command to install Playground:

npm install -g composer-playground@0.20

Now we can run Hyperledger Fabric.

Step 3 – Hyperledger Fabric

This step will allow you to run a Hyperledger Fabric runtime locally and deploy your business networks:

Choose a directory, such as ~/fabric-dev-servers.
Now get the .tar.gz file, which contains the tools for installing Hyperledger Fabric:

mkdir ~/fabric-dev-servers && cd ~/fabric-dev-servers
curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.tar.gz
tar -xvf fabric-dev-servers.tar.gz

You've downloaded some scripts that will allow the installation of a local Hyperledger Fabric v1.2 runtime. To download the actual environment Docker images, run the following commands in your user home directory:

cd ~/fabric-dev-servers
export FABRIC_VERSION=hlfv12
./downloadFabric.sh

Well done! Now you have everything required for a typical developer environment.

Step 4 – IDE

Hyperledger Composer allows you to work with many IDEs. Two well-known ones are Atom and VS Code, both of which have good extensions for working with Hyperledger Composer. Atom lets you use the composer-atom plugin (https://github.com/hyperledger/composer-atom-plugin) for syntax highlighting of the Hyperledger Composer Modeling Language. You can download Atom at https://atom.io/, and you can download VS Code at https://code.visualstudio.com/download.

Installing Hyperledger Fabric 1.3 using Docker

There are many ways to download the Hyperledger Fabric platform; Docker is the most used method, and you can use an official repository. If you're using Windows, you'll want to use the Docker Quickstart Terminal for the upcoming terminal commands. If you're using Docker for Windows, consult the Docker documentation for shared drives (https://docs.docker.com/docker-for-windows/#shared-drives) and use a location under one of the shared drives.

Create a directory where the sample files will be cloned from the Hyperledger GitHub repository, and run the following command:

$ git clone -b master https://github.com/hyperledger/fabric-samples.git

To download and install Hyperledger Fabric on your local machine, you have to download the platform-specific binaries by running the following command:

$ curl -sSl https://goo.gl/6wtTN5 | bash -s 1.1.0

The complete installation guide can be found on the Hyperledger site.

Deploying Hyperledger Fabric 1.3 to a Kubernetes environment

This step is recommended for those of you who have the experience and skills to work with Kubernetes, a cloud environment, and networks, and who would like an in-depth exploration of Hyperledger Fabric 1.3. Kubernetes is a container orchestration platform and is available on major cloud providers such as Amazon Web Services, Google Cloud Platform, IBM, and Azure.

Marcelo Feitoza Parisi, one of IBM's brilliant cloud architects, has created and published a guide on GitHub on how to set up a production-level Hyperledger Fabric environment on Kubernetes. The guide is available at https://github.com/feitnomore/hyperledger-fabric-kubernetes.

If you've enjoyed reading this post, head over to the book, Hands-On IoT Solutions with Blockchain, to understand how IoT and blockchain technology can help solve the challenges of the modern food chain.

IBM announces the launch of Blockchain World Wire, a global blockchain network for cross-border payments
Google expands its Blockchain search tools, adds six new cryptocurrencies in BigQuery Public Datasets
Blockchain governance and uses beyond finance – Carnegie Mellon university podcast

Intelligent mobile projects with TensorFlow: Build a basic Raspberry Pi robot that listens, moves, sees, and speaks [Tutorial]

Bhagyashree R
27 Aug 2018
14 min read
According to Wikipedia, "The Raspberry Pi is a series of small single-board computers developed in the United Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in schools and in developing countries." The official Raspberry Pi site describes it as "a small and affordable computer that you can use to learn programming." If you have never heard of or used Raspberry Pi before, just go to its website and chances are you'll quickly fall in love with the cool little thing. Little yet powerful: in fact, the developers of TensorFlow made TensorFlow available on Raspberry Pi from early versions around mid-2016, so we can run complicated TensorFlow models on a tiny computer that you can buy for about $35.

In this article we will see how to set up TensorFlow on Raspberry Pi and use the TensorFlow image recognition and audio recognition models, along with text-to-speech and robot movement APIs, to build a Raspberry Pi robot that can move, see, listen, and speak. This tutorial is an excerpt from a book written by Jeff Tang titled Intelligent Mobile Projects with TensorFlow.

Setting up TensorFlow on Raspberry Pi

To use TensorFlow in Python, we can install the TensorFlow 1.6 nightly build for Pi from the TensorFlow Jenkins continuous integration site (http://ci.tensorflow.org/view/Nightly/job/nightly-pi/223/artifact/output-artifacts):

sudo pip install http://ci.tensorflow.org/view/Nightly/job/nightly-pi/lastSuccessfulBuild/artifact/output-artifacts/tensorflow-1.6.0-cp27-none-any.whl

This method is quite common. A more complicated method is to use the makefile, which is required when you need to build and use the TensorFlow library. The Raspberry Pi section of the official TensorFlow makefile documentation (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/makefile) has detailed steps to build the TensorFlow library, but it may not work with every release of TensorFlow. The steps there work perfectly with an earlier version of TensorFlow (0.10), but would cause many "undefined reference to google::protobuf" errors with TensorFlow 1.6.

The following steps have been tested with the TensorFlow 1.6 release, downloadable at https://github.com/tensorflow/tensorflow/releases/tag/v1.6.0; you can certainly try a newer version from the TensorFlow releases page, or clone the latest TensorFlow source with git clone https://github.com/tensorflow/tensorflow, and fix any possible hiccups. After cd'ing into your TensorFlow source root, run the following commands:

tensorflow/contrib/makefile/download_dependencies.sh
sudo apt-get install -y autoconf automake libtool gcc-4.8 g++-4.8
cd tensorflow/contrib/makefile/downloads/protobuf/
./autogen.sh
./configure
make CXX=g++-4.8
sudo make install
sudo ldconfig # refresh shared library cache
cd ../../../../..
export HOST_NSYNC_LIB=`tensorflow/contrib/makefile/compile_nsync.sh`
export TARGET_NSYNC_LIB="$HOST_NSYNC_LIB"

Make sure you run make CXX=g++-4.8, instead of just make as documented in the official TensorFlow makefile documentation, because Protobuf must be compiled with the same gcc version as the one used to build the TensorFlow library below, in order to fix those "undefined reference to google::protobuf" errors.
Now try to build the TensorFlow library using the following command:

make -f tensorflow/contrib/makefile/Makefile HOST_OS=PI TARGET=PI \
  OPTFLAGS="-Os -mfpu=neon-vfpv4 -funsafe-math-optimizations -ftree-vectorize" CXX=g++-4.8

After a few hours of building, you'll likely get an error such as "virtual memory exhausted: Cannot allocate memory", or the Pi board will just freeze due to running out of memory. To fix this, we need to set up swap, because without it, when an application runs out of memory it gets killed due to a kernel panic. There are two ways to set up swap: a swap file and a swap partition. Raspbian uses a default swap file of 100 MB on the SD card, as shown here using the free command:

pi@raspberrypi:~/tensorflow-1.6.0 $ free -h
       total   used   free   shared   buff/cache   available
Mem:    927M    45M   843M     660K          38M        838M
Swap:    99M    74M    25M

To increase the swap file size to 1 GB, modify the /etc/dphys-swapfile file via sudo vi /etc/dphys-swapfile, changing CONF_SWAPSIZE=100 to CONF_SWAPSIZE=1024, then restart the swap file service:

sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start

After this, free -h will show the Swap total to be 1.0 GB.

A swap partition, created on a separate USB disk, is preferred because a swap partition can't get fragmented, whereas a swap file on the SD card can get fragmented easily, causing slower access. To set up a swap partition, plug a USB stick that holds no data you need into the Pi board, then run sudo blkid, and you'll see something like this:

/dev/sda1: LABEL="EFI" UUID="67E3-17ED" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="622fddad-da3c-4a09-b6b3-11233a2ca1f6"
/dev/sda2: UUID="E67F-6EAB" TYPE="vfat" PARTLABEL="NO NAME" PARTUUID="a045107a-9e7f-47c7-9a4b-7400d8d40f8c"

/dev/sda2 is the partition we'll use as the swap partition. Now unmount it and format it to be a swap partition:

sudo umount /dev/sda2
sudo mkswap /dev/sda2
mkswap: /dev/sda2: warning: wiping old swap signature.
Setting up swapspace version 1, size = 29.5 GiB (31671701504 bytes)
no label, UUID=23443cde-9483-4ed7-b151-0e6899eba9de

You'll see a UUID in the mkswap output; run sudo vi /etc/fstab and add a line like the following to the fstab file, with that UUID value:

UUID=<UUID value> none swap sw,pri=5 0 0

Save and exit the fstab file, then run sudo swapon -a. Now if you run free -h again, you'll see the Swap total to be close to the USB storage size. We definitely don't need all that space for swap; in fact, the recommended maximum swap size for the Raspberry Pi 3 board with 1 GB of memory is 2 GB, but we'll leave it as is because we just want to successfully build the TensorFlow library.

With either of the swap setting changes, we can rerun the make command:

make -f tensorflow/contrib/makefile/Makefile HOST_OS=PI TARGET=PI \
  OPTFLAGS="-Os -mfpu=neon-vfpv4 -funsafe-math-optimizations -ftree-vectorize" CXX=g++-4.8

After this completes, the TensorFlow library will be generated as tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a. Now we can build the image classification example using the library.

Image recognition and text to speech

There are two TensorFlow Raspberry Pi example apps (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/pi_examples) located in tensorflow/contrib/pi_examples: label_image and camera. We'll modify the camera example app to integrate text to speech so the app can speak out its recognized images while moving around.
Before we build and test the two apps, we need to install some libraries and download the pre-built TensorFlow Inception model file:

sudo apt-get install -y libjpeg-dev
sudo apt-get install libv4l-dev
curl https://storage.googleapis.com/download.tensorflow.org/models/inception_dec_2015_stripped.zip -o /tmp/inception_dec_2015_stripped.zip
cd ~/tensorflow-1.6.0
unzip /tmp/inception_dec_2015_stripped.zip -d tensorflow/contrib/pi_examples/label_image/data/

To build the label_image and camera apps, run:

make -f tensorflow/contrib/pi_examples/label_image/Makefile
make -f tensorflow/contrib/pi_examples/camera/Makefile

You may encounter the following error when building the apps:

./tensorflow/core/platform/default/mutex.h:25:22: fatal error: nsync_cv.h: No such file or directory
#include "nsync_cv.h"
         ^
compilation terminated.

To fix this, run sudo cp tensorflow/contrib/makefile/downloads/nsync/public/nsync*.h /usr/include. Then edit the tensorflow/contrib/pi_examples/label_image/Makefile or tensorflow/contrib/pi_examples/camera/Makefile file and add the following library and include paths before running the make command again:

-L$(DOWNLOADSDIR)/nsync/builds/default.linux.c++11 \
-lnsync \

To test the two apps, run them directly:

tensorflow/contrib/pi_examples/label_image/gen/bin/label_image
tensorflow/contrib/pi_examples/camera/gen/bin/camera

Take a look at the C++ source code, tensorflow/contrib/pi_examples/label_image/label_image.cc and tensorflow/contrib/pi_examples/camera/camera.cc, and you'll see they use C++ code similar to that in our iOS apps in the previous chapters to load the model graph file, prepare the input tensor, run the model, and get the output tensor.

By default, the camera example also uses the prebuilt Inception model unzipped into the label_image/data folder. But for your own specific image classification task, you can provide your own model, retrained via transfer learning, using the --graph parameter when running the two example apps.

In general, voice is a Raspberry Pi robot's main UI for interacting with us. Ideally, we should run a TensorFlow-powered, natural-sounding Text-to-Speech (TTS) model such as WaveNet (https://deepmind.com/blog/wavenet-generative-model-raw-audio) or Tacotron (https://github.com/keithito/tacotron), but running and deploying such a model is beyond the scope of this article. It turns out that we can use a much simpler TTS library called Flite, by CMU (http://www.festvox.org/flite), which offers pretty decent TTS, and it takes just one simple command to install: sudo apt-get install flite. If you want to install the latest version of Flite to hopefully get better TTS quality, just download the latest Flite source from the link and build it.

To test Flite with our USB speaker, run flite with the -t parameter followed by a double-quoted text string, such as flite -t "i recommend the ATM machine". If you don't like the default voice, you can find other supported voices by running flite -lv, which should return: Voices available: kal awb_time kal16 awb rms slt. Then you can specify a voice to use for TTS: flite -voice rms -t "i recommend the ATM machine".

To let the camera app speak out the recognized objects, which should be the desired behavior when the Raspberry Pi robot moves around, you can use this simple pipe command:

tensorflow/contrib/pi_examples/camera/gen/bin/camera | xargs -n 1 flite -t

You'll likely hear too much voice.
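If you prefer to do that piping in Python rather than in the shell, here is a minimal sketch (not from the book) that reads the camera binary's output line by line and only speaks a label when it changes, assuming the binary prints one recognized label per line, as the xargs pipe above assumes:

# Illustrative sketch (not from the book): speak each newly recognized label once,
# assuming the camera binary prints one label per line to stdout.
import subprocess

CAMERA_BIN = "tensorflow/contrib/pi_examples/camera/gen/bin/camera"

proc = subprocess.Popen([CAMERA_BIN], stdout=subprocess.PIPE)
last_label = None
for raw_line in iter(proc.stdout.readline, b""):
    label = raw_line.decode("utf-8", "ignore").strip()
    if label and label != last_label:
        # Call Flite only when the recognized label changes, to avoid constant chatter.
        subprocess.call(["flite", "-voice", "rms", "-t", label])
        last_label = label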
To fine-tune the TTS result of image classification, you can also modify the camera.cc file and add the following code to the PrintTopLabels function, before rebuilding the example using make -f tensorflow/contrib/pi_examples/camera/Makefile:

std::string cmd = "flite -voice rms -t \"";
cmd.append(labels[label_index]);
cmd.append("\"");
system(cmd.c_str());

Now that we have completed the image classification and speech synthesis tasks without using any Cloud APIs, let's see how we can do audio recognition on Raspberry Pi.

Audio recognition and robot movement

To use the pre-trained audio recognition model from the TensorFlow tutorial (https://www.tensorflow.org/tutorials/audio_recognition), we'll reuse a listen.py Python script from https://gist.github.com/aallan, and add GoPiGo API calls to control the robot movement after it recognizes four basic audio commands: "left," "right," "go," and "stop." The other six commands supported by the pre-trained model ("yes," "no," "up," "down," "on," and "off") don't apply well to our example.

To run the script, first download the pre-trained audio recognition model from http://download.tensorflow.org/models/speech_commands_v0.01.zip and unzip it, to the Pi board's /tmp directory for example, then run:

python listen.py --graph /tmp/conv_actions_frozen.pb --labels /tmp/conv_actions_labels.txt -I plughw:1,0

Or you can run:

python listen.py --graph /tmp/speech_commands_graph.pb --labels /tmp/conv_actions_labels.txt -I plughw:1,0

Note that the plughw value 1,0 should match the card number and device number of your USB microphone, which can be found using the arecord -l command we showed before. The listen.py script also supports many other parameters. For example, we can use --detection_threshold 0.5 instead of the default detection threshold of 0.8.

Let's now take a quick look at how listen.py works before we add the GoPiGo API calls to make the robot move. listen.py uses Python's subprocess module and its Popen class to spawn a new process that runs the arecord command with appropriate parameters. The Popen class has an stdout attribute that specifies the executed arecord command's standard output file handle, which can be used to read the recorded audio bytes.

The Python code to load the trained model graph is as follows:

with tf.gfile.FastGFile(filename, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

A TensorFlow session is created using tf.Session(), and after the graph is loaded and the session created, the recorded audio buffer is sent, along with the sample rate, as the input data to the TensorFlow session's run method, which returns the prediction of the recognition:

run(softmax_tensor, {
    self.input_samples_name_: input_data,
    self.input_rate_name_: self.sample_rate_
})

Here, softmax_tensor is defined as the TensorFlow graph's get_tensor_by_name(self.output_name_), and output_name_, input_samples_name_, and input_rate_name_ are defined as labels_softmax, decoded_sample_data:0, and decoded_sample_data:1, respectively.

On Raspberry Pi, you can choose to run the TensorFlow models using the TensorFlow Python API directly, or the C++ API (as in the label_image and camera examples), although normally you'd still train the models on a more powerful computer. For the complete TensorFlow Python API documentation, see https://www.tensorflow.org/api_docs/python.
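Putting those pieces together, a stripped-down, self-contained sketch of loading the frozen speech-commands graph and running one prediction with the TF 1.x Python API might look like the following. This is illustrative only, not the full listen.py; the tensor names and the input shape are taken from the article's description and may need adjusting for your exact model file:

# Illustrative sketch: load the frozen graph and run a single prediction (TF 1.x).
# Tensor names and input shape are assumptions based on the description above.
import numpy as np
import tensorflow as tf

GRAPH_FILE = '/tmp/conv_actions_frozen.pb'
LABELS_FILE = '/tmp/conv_actions_labels.txt'

with tf.gfile.FastGFile(GRAPH_FILE, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

labels = [line.strip() for line in tf.gfile.GFile(LABELS_FILE)]

with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('labels_softmax:0')
    # One second of silence at 16 kHz as dummy input; the real script feeds
    # the audio buffer read from arecord's stdout instead.
    audio = np.zeros((16000, 1), dtype=np.float32)
    predictions = sess.run(softmax_tensor, {
        'decoded_sample_data:0': audio,
        'decoded_sample_data:1': 16000,
    })
    print(labels[int(np.argmax(predictions))])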
To use the GoPiGo Python API to make the robot move based on your voice commands, first add the following two lines to listen.py:

import easygopigo3 as gpg
gpg3_obj = gpg.EasyGoPiGo3()

Then add the following code to the end of the def add_data method:

if current_top_score > self.detection_threshold_ and time_since_last_top > self.suppression_ms_:
    self.previous_top_label_ = current_top_label
    self.previous_top_label_time_ = current_time_ms
    is_new_command = True
    logger.info(current_top_label)
    if current_top_label == "go":
        gpg3_obj.drive_cm(10, False)
    elif current_top_label == "left":
        gpg3_obj.turn_degrees(-30, False)
    elif current_top_label == "right":
        gpg3_obj.turn_degrees(30, False)
    elif current_top_label == "stop":
        gpg3_obj.stop()

Now put your Raspberry Pi robot on the ground, connect to it with ssh from your computer, and run the following script:

python listen.py --graph /tmp/conv_actions_frozen.pb --labels /tmp/conv_actions_labels.txt -I plughw:1,0 --detection_threshold 0.5

You'll see output like this:

INFO:audio:started recording
INFO:audio:_silence_
INFO:audio:_silence_

Then you can say left, right, stop, go, and stop to see the commands get recognized and the robot move accordingly:

INFO:audio:left
INFO:audio:_silence_
INFO:audio:_silence_
INFO:audio:right
INFO:audio:_silence_
INFO:audio:stop
INFO:audio:_silence_
INFO:audio:go
INFO:audio:stop

You can run the camera app in a separate terminal, so while the robot moves around based on your voice commands, it'll recognize new images it sees and speak out the results. That's all it takes to build a basic Raspberry Pi robot that listens, moves, sees, and speaks, which is what the Google I/O 2016 demo does, but without using any Cloud APIs. It's far from a fancy robot that can understand natural human speech, engage in interesting conversations, or perform useful and non-trivial tasks. But powered with pre-trained, retrained, or other powerful TensorFlow models, and using all kinds of sensors, you can certainly add more and more intelligence and physical power to the Pi robot we have built.

Google TensorFlow is used to train all the models deployed and running on mobile devices. This book covers 10 projects on the implementation of all major AI areas on iOS, Android, and Raspberry Pi: computer vision, speech and language processing, and machine learning, including traditional, reinforcement, and deep reinforcement learning. If you liked this tutorial and would like to implement projects for major AI areas on iOS, Android, and Raspberry Pi, check out the book Intelligent Mobile Projects with TensorFlow.

TensorFlow 2.0 is coming. Here's what we can expect.
Build and train an RNN chatbot using TensorFlow [Tutorial]
Use TensorFlow and NLP to detect duplicate Quora questions [Tutorial]

Laracon US 2019 highlights: Laravel 6 release update, Laravel Vapor, and more

Bhagyashree R
26 Jul 2019
5 min read
Laracon US 2019, probably the biggest Laravel conference, wrapped up yesterday. Its creator, Tylor Otwell kick-started the event by talking about the next major release, Laravel 6. He also showcased a project that he has been working on called Laravel Vapor, a full-featured serverless management and deployment dashboard for PHP/Laravel. https://twitter.com/taylorotwell/status/1154168986180997125 This was a two-day event from July 24-25 hosted at Time Square, NYC. The event brings together people passionate about creating applications with Laravel, the open-source PHP web framework. Many exciting talks were hosted at this game-themed event. Evan You, the creator of Vue, was there presenting what’s coming in Vue.js 3.0. Caleb Porzio, a developer at Tighten Co., showcased a Laravel framework named Livewire that enables you to build dynamic user interfaces with vanilla PHP.  Keith Damiani, a Principal Programmer at Tighten, talked about graph database use cases. You can watch this highlights video compiled by Romega Digital to get a quick overview of the event: https://www.youtube.com/watch?v=si8fHDPYFCo&feature=youtu.be Laravel 6 coming next month Since its birth, Laravel has followed a custom versioning system. It has been on 5.x release version for the past four years now. The team has now decided to switch to semantic versioning system. The framework currently stands at version 5.8, and instead of calling the new release 5.9 the team has decided to go with Laravel 6, which is scheduled for the next month. Otwell emphasized that they have decided to make this change to bring more consistency in the ecosystem as all optional Laravel components such as Cashier, Dusk, Valet, Socialite use semantic versioning. This does not mean that there will be any “paradigm shift” and developers have to rewrite their stuff. “This does mean that any breaking change would necessitate a new version which means that the next version will be 7.0,” he added. With the new version comes new branding Laravel gets a fresh look with every major release ranging from updated logos to a redesigned website. Initially, this was a volunteer effort with different people pitching in to help give Laravel a fresh look. Now that Laravel has got some substantial backing, Otwell has worked with Focus Lab, one of the top digital design agencies in the US. They have together come up with a new logo and a brand new website. The website looks easy to navigate and also provides improved documentation to give developers a good reading experience. Source: Laravel Laravel Vapor, a robust serverless deployment platform for Laravel After giving a brief on version 6 and the updated branding, Otwell showcased his new project named Laravel Vapor. Currently, developers use Forge for provisioning and deploying their PHP applications on DigitalOcean, Linode, AWS, and more. It provides painless Virtual Private Server (VPS) management. It is best suited for medium and small projects and performs well with basic load balancing. However, it does lack a few features that could have been helpful for building bigger projects such as autoscaling. Also, developers have to worry about updating their operating systems and PHP versions. To address these limitations, Otwell created this deployment platform. Here are some of the advantages Laravel Vapor comes with: Better scalability: Otwell’s demo showed that it can handle over half a million requests with an average response time of 12 ms. Facilitates collaboration: Vapor is built around teams. 
You can create as many teams as you require by just paying for one single plan. Fine-grained control: It gives you fine-grained control over what each team member can do. You can set what all they can do across all the resources Vapor manages. A “vanity URL” for different environments: Vapor gives you a staging domain, which you can access with what Otwell calls a “vanity URL.” It enables you to immediately access your applications with “a nice domain that you can share with your coworkers until you are ready to assign a custom domain,“ says Otwell. Environment metrics: Vapor provides many environment metrics that give you an overview of an application environment. These metrics include how many HTTP requests have the application got in the last 24 hours, how many CLI invocations, what’s the average duration of those things, how much these cost on lambda, and more. Logs: You can review and search your recent logs right from the Vapor UI. It also auto-updates when any new entry comes in the log. Databases: With Vapor, you can provision two types of databases: fixed-sized database and serverless database. The fixed-sized database is the one where you have to pick its specifications like VCPU, RAM, etc. In the serverless one, however, if you do not select these specifications and it will automatically scale according to the demand. Caches: You can create Redis clusters right from the Vapor UI with as many nodes as you want. It supports the creation and management of elastic Redis cache clusters, which can be scaled without experiencing any downtime. You can attach them to any of the team’s projects and use them with multiple projects at the same time. To watch the entire demonstration by Otwell check out this video: https://www.youtube.com/watch?v=XsPeWjKAUt0&feature=youtu.be Laravel 5.7 released with support for email verification, improved console testing Building a Web Service with Laravel 5 Symfony leaves PHP-FIG, the framework interoperability group

Four IBM facial recognition patents in 2018, we found intriguing

Natasha Mathur
11 Aug 2018
10 min read
The media has gone into a frenzy over Google’s latest facial recognition patent that shows an algorithm can track you across social media and gather your personal details. We thought, we’d dive further into what other patents Google has applied for in facial recognition tehnology in 2018. What we discovered was an eye opener (pun intended). Google is only the 3rd largest applicant with IBM and Samsung leading the patents race in facial recognition. As of 10th Aug, 2018, 1292 patents have been granted in 2018 on Facial recognition. Of those, IBM received 53. Here is the summary comparison of leading companies in facial recognition patents in 2018. Read Also: Top four Amazon patents in 2018 that use machine learning, AR, and robotics IBM has always been at the forefront of innovation. Let’s go back about a quarter of a century, when IBM invented its first general-purpose computer for business. It built complex software programs that helped in launching Apollo missions, putting the first man on the moon. It’s chess playing computer, Deep Blue, back in 1997,  beat Garry Kasparov, in a traditional chess match (the first time a computer beat a world champion). Its researchers are known for winning Nobel Prizes. Coming back to 2018, IBM unveiled the world’s fastest supercomputer with AI capabilities, and beat the Wall Street expectations by making $20 billion in revenue in Q3 2018 last month, with market capitalization worth $132.14 billion as of August 9, 2018. Its patents are a major part of why it continues to be valuable highly. IBM continues to come up with cutting-edge innovations and to protect these proprietary inventions, it applies for patent grants. United States is the largest consumer market in the world, so patenting the technologies that the companies come out with is a standard way to attain competitive advantage. As per the United States Patent and Trademark Office (USPTO), Patent is an exclusive right to invention and “the right to exclude others from making, using, offering for sale, or selling the invention in the United States or “importing” the invention into the United States”. As always, IBM has applied for patents for a wide spectrum of technologies this year from Artificial Intelligence, Cloud, Blockchain, Cybersecurity, to Quantum Computing. Today we focus on IBM’s patents in facial recognition field in 2018. Four IBM facial recognition innovations patented in 2018 Facial recognition is a technology which identifies and verifies a person from a digital image or a video frame from a video source and IBM seems quite invested in it. Controlling privacy in a face recognition application Date of patent: January 2, 2018 Filed: December 15, 2015 Features: IBM has patented for a face-recognition application titled “Controlling privacy in a face recognition application”. Face recognition technologies can be used on mobile phones and wearable devices which may hamper the user privacy. This happens when a "sensor" mobile user identifies a "target" mobile user without his or her consent. The present mobile device manufacturers don’t provide the privacy mechanisms for addressing this issue. This is the major reason why IBM has patented this technology. Editor’s Note: This looks like an answer to the concerns raised over Google’s recent social media profiling facial recognition patent.   How it works? Controlling privacy in a face recognition application It consists of a privacy control system, which is implemented using a cloud computing node. 
The system uses a camera to find out information about the people, by using a face recognition service deployed in the cloud. As per the patent application “the face recognition service may have access to a face database, privacy database, and a profile database”. Controlling privacy in a face recognition application The facial database consists of one or more facial signatures of one or more users. The privacy database includes privacy preferences of target users. Privacy preferences will be provided by the target user and stored in the privacy database.The profile database contains information about the target user such as name, age, gender, and location. It works by receiving an input which includes a face recognition query and a digital image of a face. The privacy control system then detects a facial signature from the digital image. The target user associated with the facial signature is identified, and profile of the target user is extracted. It then checks the privacy preferences of the user. If there are no privacy preferences set, then it transmits the profile to the sensor user. But, if there are privacy preferences then the censored profile of the user is generated omitting out the private elements in the profile. There are no announcements, as for now, regarding when this technology will hit the market. Evaluating an impact of a user's content utilized in a social network Date of patent: January 30, 2018 Filed: April 11, 2015 Features:  IBM has patented for an application titled “Evaluating an impact of a user's content utilized in a social network”.  With so much data floating around on social network websites, it is quite common for the content of a document (e.g., e-mail message, a post, a word processing document, a presentation) to be reused, without the knowledge of an original author. Evaluating an impact of a user's content utilised in a social network Evaluating an impact of a user's content utilized in a social network Because of this, the original author of the content may not receive any credit, which creates less motivation for the users to post their original content in a social network. This is why IBM has decided to patent for this application. Evaluating an impact of a user's content utilized in a social network As per the patent application, the method/system/product  “comprises detecting content in a document posted on a social network environment being reused by a second user. The method further comprises identifying an author of the content. The method additionally comprises incrementing a first counter keeping track of a number of times the content has been adopted in derivative works”. There’s a processor, which generates an “impact score” which  represents the author's ability to influence other users to adopt the content. This is based on the number of times the content has been adopted in the derivative works. Also, “the method comprises providing social credit to the author of the content using the impact score”. Editor’s Note: This is particularly interesting to us as IBM, unlike other tech giants, doesn’t own a popular social network or media product. (Google has Google+, Microsoft has LinkedIn, Facebook and Twitter are social, even Amazon has stakes in a media entity in the form of Washington Post). No information is present about when or if this system will be used among social network sites. 
Spoof detection for facial recognition Date of patent: February 20, 2018 Filed: December 10, 2015 Features: IBM patented an application named “Spoof detection for facial recognition”.  It provides a method to determine whether the image is authentic or not. As per the patent “A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source.” Editor’s Note: This seems to have a direct impact on the work around tackling deepFakes, which incidentally is something DARPA is very keen on. Could IBM be vying for a long term contract with the government? How it works? The patent consists of a system that helps detect “if a face in a facial recognition authentication system is a three-dimensional structure based on multiple selected images from the input video”.                                      Spoof detection for facial recognition There are four or more two-dimensional feature points which are located via an image processing device connected to the camera. Here the two-dimensional feature points do not lie on the same two-dimensional plane. The patent reads that “one or more additional images of the user's face can be received with the camera; and, the at least four two-dimensional feature points can be located on each additional image with the image processor. The image processor can identify displacements between the two-dimensional feature points on the additional image and the two-dimensional feature points on the first image for each additional image” Spoof detection for facial recognition There is also a processor connected to the image processing device that helps figure out whether the displacements conform to a three-dimensional surface model. The processor can then determine whether to authenticate the user depending on whether the displacements conform to the three-dimensional surface model. Facial feature location using symmetry line Date of patent: June 5, 2018 Filed: July 20, 2015 Features: IBM patented for an application titled “Facial feature location using symmetry line”. As per the patent, “In many image processing applications, identifying facial features of the subject may be desired. Currently, location of facial features require a search in four dimensions using local templates that match the target features. Such a search tends to be complex and prone to errors because it has to locate both (x, y) coordinates, scale parameter and rotation parameter”. Facial feature location using symmetry line Facial feature location using symmetry line The application consists of a computer-implemented method that obtains an image of the subject’s face. After that it automatically detects a symmetry line of the face in the image, where the symmetry line intersects at least a mouth region of the face. It then automatically locates a facial feature of the face using the symmetry line. There’s also a computerised apparatus with a processor which performs the steps of obtaining an image of a subject’s face and helps locate the facial feature.  Editor’s note: Atleast, this patent makes direct sense to us. IBM is majorly focusing on bring AI to healthcare. A patent like this can find a lot of use in not just diagnostics and patient care, but also in cutting edge areas like robotics enabled surgeries. IBM is continually working on new technologies to provide the world with groundbreaking innovations. 
Its big investments in facial recognition technology speaks volumes about how IBM is well-versed with its endless possibilities. With the facial recognition technological progress,  come the privacy fears. But, IBM’s facial recognition application patent has got it covered as it lets the users set privacy preferences. This can be a great benchmark for IBM as no many existing applications are currently doing it. The social credit score evaluating app can really help bring the voice back to the users interested in posting content on social media platforms. The spoof detection application will help maintain authenticity by detecting forged images. Lastly, the facial feature detection can act as a great additional feature for image processing applications. IBM has been heavily investing in facial recognition technology. There are no guarantees by IBM as to whether these patents will ever make it to practical applications, but it does say a lot about how IBM thinks about the technology. Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics Facebook patents its news feed filter tool to provide more relevant news to its users Google’s new facial recognition patent uses your social network to identify you!  

Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]

Natasha Mathur
31 Dec 2018
13 min read
Augmented Reality (AR) filters of the kind used in applications such as Snapchat and Instagram have gained worldwide popularity. This tutorial is an excerpt taken from the book Machine Learning Projects for Mobile Applications, written by Karthikeyan NG. In this tutorial, we will look at how you can build your own Augmented Reality (AR) filter using TensorFlow Lite, a platform that allows you to run machine learning models on mobile and embedded devices.

With this application, we will place AR filters on top of a real-time camera view. Using AR filters, we can add a mustache to a male's facial key point, and we can add a relevant emotional expression on top of the eyes. TensorFlow Lite models are used to detect gender and emotion from the camera view. We will be looking at concepts such as MobileNet models and building the dataset required for model conversion before looking at how to build the Android application.

MobileNet models

We use the MobileNet model to identify gender, while the AffectNet model is used to detect emotion. Facial key point detection is achieved using Google's Mobile Vision API. TensorFlow offers various pre-trained, drag-and-drop models that identify approximately 1,000 default objects. Compared with other similar models, such as the Inception model datasets, MobileNet does better on latency, size, and accuracy. With a full-fledged model there is a significant amount of lag in output performance, but the trade-off is acceptable when the model has to be deployable on a mobile device and used for real-time, offline detection. The MobileNet architecture deals with a 3 x 3 convolution layer in a different way from a typical CNN. For a more detailed explanation of the MobileNet architecture, please visit https://arxiv.org/pdf/1704.04861.pdf.

Let's look at an example of how to use MobileNet. We won't build one more generic dataset in this case. Instead, we will write a simple classifier to find Pikachu in an image. The sample pictures show an image of Pikachu and an image without Pikachu.

Building the dataset

To build our own classifier, we need datasets that contain images with and without Pikachu. You can start with 1,000 images for each class, and you can pull down such images from https://search.creativecommons.org/. Let's create two folders named pikachu and no-pikachu and drop those images in accordingly. Always ensure that you have the appropriate licenses to use any images, especially for commercial purposes. An image scraper for the Google and Bing APIs is available at https://github.com/rushilsrivastava/image_search.

Now we have an image folder, which is structured as follows:

/dataset/
    /pikachu/[image1,..]
    /no-pikachu/[image1,..]
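Before retraining, it's worth confirming that both class folders actually contain what you expect. The following small Python sketch (not from the book; the dataset path is the layout assumed above) simply counts the images per class:

# Illustrative helper (not from the book): count images in each class folder
# of the /dataset layout described above before retraining.
import os

DATASET_DIR = "dataset"
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png"}

for class_name in sorted(os.listdir(DATASET_DIR)):
    class_dir = os.path.join(DATASET_DIR, class_name)
    if not os.path.isdir(class_dir):
        continue
    count = sum(1 for f in os.listdir(class_dir)
                if os.path.splitext(f)[1].lower() in IMAGE_EXTENSIONS)
    print("%s: %d images" % (class_name, count))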
Retraining the images

We can now start labeling our images. With TensorFlow, this job becomes easier. Assuming that you have TensorFlow installed already, download the following retraining script:

curl https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py

Let's retrain the images with the Python script now:

python retrain.py \
    --image_dir ~/MLmobileapps/Chapter5/dataset/ \
    --learning_rate=0.0001 \
    --testing_percentage=20 \
    --validation_percentage=20 \
    --train_batch_size=32 \
    --validation_batch_size=-1 \
    --eval_step_interval=100 \
    --how_many_training_steps=1000 \
    --flip_left_right=True \
    --random_scale=30 \
    --random_brightness=30 \
    --architecture mobilenet_1.0_224 \
    --output_graph=output_graph.pb \
    --output_labels=output_labels.txt

If you set validation_batch_size to -1, it will validate the whole dataset; learning_rate=0.0001 works well, and you can adjust it and try for yourself. In the architecture flag, we choose which version of MobileNet to use: 1.0, 0.75, 0.50, or 0.25. The suffix 224 represents the image resolution; you can also specify 224, 192, 160, or 128.

Model conversion from GraphDef to TFLite

TocoConverter is used to convert a TensorFlow GraphDef file or SavedModel into either a TFLite FlatBuffer or a graph visualization. TOCO stands for TensorFlow Lite Optimizing Converter. We need to pass the data through command-line arguments. The following command-line arguments are available with TensorFlow 1.10.0:

--output_file OUTPUT_FILE: Filepath of the output tflite model.
--graph_def_file GRAPH_DEF_FILE: Filepath of input TensorFlow GraphDef.
--saved_model_dir: Filepath of directory containing the SavedModel.
--keras_model_file: Filepath of HDF5 file containing tf.Keras model.
--output_format {TFLITE,GRAPHVIZ_DOT}: Output file format.
--inference_type {FLOAT,QUANTIZED_UINT8}: Target data type in the output.
--inference_input_type {FLOAT,QUANTIZED_UINT8}: Target data type of real-number input arrays.
--input_arrays INPUT_ARRAYS: Names of the input arrays, comma-separated.
--input_shapes INPUT_SHAPES: Shapes corresponding to --input_arrays, colon-separated.
--output_arrays OUTPUT_ARRAYS: Names of the output arrays, comma-separated.

We can now use the toco tool to convert the TensorFlow model into a TensorFlow Lite model:

toco \
    --graph_def_file=/tmp/output_graph.pb \
    --output_file=/tmp/optimized_graph.tflite \
    --input_arrays=Mul \
    --output_arrays=final_result \
    --input_format=TENSORFLOW_GRAPHDEF \
    --output_format=TFLITE \
    --input_shape=1,${224},${224},3 \
    --inference_type=FLOAT \
    --input_data_type=FLOAT

Similarly, we have two model files used in this application: the gender model and the emotion model. These will be explained in the following two sections.

To convert ML models in TensorFlow 1.9.0 to TensorFlow 1.11.0, use TocoConverter. TocoConverter is semantically identical to the TFLite Converter. To convert models prior to TensorFlow 1.9, use the toco_convert function. Run help(tf.contrib.lite.toco_convert) to get details about acceptable parameters.
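As a rough illustration of the Python route just mentioned, a TocoConverter call for the retrained graph might look like the following. This is a sketch assuming TensorFlow 1.9 to 1.11; the input and output array names Mul and final_result are the same ones used in the toco command above and may differ for other graphs:

# Sketch of the Python conversion path (TensorFlow 1.9-1.11, tf.contrib.lite API);
# array names match the toco command above and are assumptions for other models.
import tensorflow as tf

converter = tf.contrib.lite.TocoConverter.from_frozen_graph(
    "/tmp/output_graph.pb",
    input_arrays=["Mul"],
    output_arrays=["final_result"],
    input_shapes={"Mul": [1, 224, 224, 3]})

tflite_model = converter.convert()
with open("/tmp/optimized_graph.tflite", "wb") as f:
    f.write(tflite_model)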
Gender model
This is built on the IMDB-WIKI dataset, which contains 500K+ celebrity faces. It uses the MobileNet_V1_224_0.5 version of MobileNet. The link to the data model project can be found here: https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/. It is very rare to find public datasets with thousands of images. This dataset is built on top of a large collection of celebrity faces drawn from two common sources: IMDb and Wikipedia. More than 100K celebrities' details were retrieved from their profiles on both sources through scripts, and the data was then organized by removing noise (irrelevant content).
Emotion model
This is built on the AffectNet model, which has more than 1 million images. It uses the MobileNet_V2_224_1.4 version of MobileNet. The link to the data model project can be found here: http://mohammadmahoor.com/affectnet/. The AffectNet model was built by collecting and annotating facial images of more than 1 million faces from the internet. The images were sourced from three search engines, using around 1,250 related keywords in six different languages.
Comparison of MobileNet versions
Our two models use different versions of MobileNet. MobileNet V2 is largely an updated version of V1 that makes it even more efficient and powerful in terms of performance. Let's compare a few factors between the two models:
The numbers above are for the model versions with a 1.0 depth multiplier, and lower numbers are better in this table. From these results, we can expect V2 to be almost twice as fast as the V1 model. On a mobile device, where memory access is more of a bottleneck than computational capability, V2 works very well.
MACs are multiply-accumulate operations. This measures how many calculations are needed to perform inference on a single 224×224 RGB image; as the image size increases, more MACs are required. From the number of MACs alone, V2 should be almost twice as fast as V1. However, it's not just about the number of calculations. On mobile devices, memory access is much slower than computation, but here V2 has the advantage too: it has only 80% of the parameter count of V1. Now, let's look at performance in terms of accuracy:
The figures shown above were measured on the ImageNet dataset. These numbers can be misleading, as they depend on all the constraints taken into account while deriving them. The IEEE paper behind the AffectNet model can be found here: http://mohammadmahoor.com/wp-content/uploads/2017/08/AffectNet_oneColumn-2.pdf.
Building the Android application
Now create a new Android project from Android Studio. This should be called ARFilter, or whatever name you prefer. On the next screen, select the Android OS versions that our application supports and select API 15 (not shown in the image), which covers almost all existing Android phones. When you are ready, press Next. On the next screen, select Add No Activity and click Finish. This creates an empty project.
Once the project is created, let's add one Empty Activity. We can select different activity styles based on our needs. Name the created activity Launcher Activity by selecting the checkbox. This adds an intent filter under the particular activity in the AndroidManifest.xml file:
<intent-filter>
  <action android:name="android.intent.action.MAIN" />
  <category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
<intent-filter>: To advertise which implicit intents your app can receive, declare one or more intent filters for each of your app components with an <intent-filter> element in your manifest file. Each intent filter specifies the type of intents it accepts based on the intent's action, data, and category. The system delivers an implicit intent to your app component only if the intent can pass through one of your intent filters. Here, the intent is to keep this activity as the first activity shown when the user opens the app.
Next, we will name the launcher activity. Once the activity is created, let's start designing the user interface (UI) layout for the activity.
Here, the user selects which model to utilize in this application. We have two models for gender and emotion detection, whose details we discussed earlier. In this activity, we will add two buttons and their corresponding model classifiers, shown as follows: With the selection of the corresponding model, we will launch the next activity accordingly using a clickListener event with the ModelSelectionActivity class as follows. Based on the clicks on the buttons on gender identification or emotion identification, we will pass on the information to the ARFilterActivity. So that the corresponding model will be loaded into memory: @Override public void onClick(View view) { int id = view.getId(); if(id==R.id.genderbtn){ Intent intent = new Intent(this, ARFilterActivity.class); intent.putExtra(ARFilterActivity.MODEL_TYPE,"gender"); startActivity(intent); } else if(id==R.id.emotionbtn){ Intent intent = new Intent(this,ARFilterActivity.class); intent.putExtra(ARFilterActivity.MODEL_TYPE,"emotion"); startActivity(intent); } } Intent: An Intent is a messaging object you can use to request an action from another app component. Although intents facilitate communication between components in several ways, there are three fundamental use cases such as starting an Activity, starting a service and delivering a broadcast. In ARFilterActivity, we will have the real-time view classification. The object that has been passed on will be received inside the filter activity, where the corresponding classifier will be invoked as follows. Based on the classifier selected from the previous activity, the corresponding model will be loaded into ARFilterActivity inside the OnCreate() method as shown as follows: public static String classifierType(){ String type = mn.getIntent().getExtras().getString("TYPE"); if(type!=null) { if(type.equals("gender")) return "gender"; else return "emotion"; } else return null; } The UI will be designed accordingly in order to display the results in the bottom part of the layout via the activity_arfilter layout as follows. CameraSourcePreview initiates the Camera2 API for a view inside that we will add GraphicOverlay class. It is a view which renders a series of custom graphics to be overlayed on top of an associated preview (that is the camera preview). The creator can add graphics objects, update the objects, and remove them, triggering the appropriate drawing and invalidation within the view. It supports scaling and mirroring of the graphics relative the camera's preview properties. The idea is that detection item is expressed in terms of a preview size but need to be scaled up to the full view size, and also mirrored in the case of the front-facing camera: <com.mlmobileapps.arfilter.CameraSourcePreview android:id="@+id/preview" android:layout_width="wrap_content" android:layout_height="wrap_content"> <com.mlmobileapps.arfilter.GraphicOverlay android:id="@+id/faceOverlay" android:layout_width="match_parent" android:layout_height="match_parent" /> </com.mlmobileapps.arfilter.CameraSourcePreview> We use the CameraPreview class from the Google open source project and the CAMERA object needs user permission based on different Android API levels: Link to Google camera API: https://github.com/googlesamples/android-Camera2Basic. Once we have the Camera API ready, we need to have the appropriate permission from the user side to utilize the camera as shown below. 
We need these following permissions: Manifest.permission.CAMERA Manifest.permission.WRITE_EXTERNAL_STORAGE private void requestPermissionThenOpenCamera() { if(ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) { if (ContextCompat.checkSelfPermission(context, Manifest.permission.WRITE_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED) { Log.e(TAG, "requestPermissionThenOpenCamera: "+Build.VERSION.SDK_INT); useCamera2 = (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP); createCameraSourceFront(); } else { ActivityCompat.requestPermissions(this, new String[] {Manifest.permission.WRITE_EXTERNAL_STORAGE}, REQUEST_STORAGE_PERMISSION); } } else { ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA_PERMISSION); } } With this, we now have an application that has a screen where we can choose which model to load. On the next screen, we have the camera view ready. We now have to load the appropriate model, detect the face on the screen, and apply the filter accordingly. Face detection on the real camera view is done through the Google Vision API. This can be added on your build.gradle as a dependency as follows. You should always use the latest version of the api: api 'com.google.android.gms:play-services-vision:15.0.0' The image classification object is initialized inside the OnCreate() method of the ARFilterActivity and inside the ImageClassifier class. The corresponding model is loaded based on user selection as follows: private void initPaths(){ String type = ARFilterActivity.classifierType(); if(type!=null) { if(type.equals("gender")){ MODEL_PATH = "gender.lite"; LABEL_PATH = "genderlabels.txt"; } else{ MODEL_PATH = "emotion.lite"; LABEL_PATH = "emotionlabels.txt"; } } } Once the model is decided, we will read the file and load them into memory. Thus in this article, we looked at concepts such as mobile net models and building the dataset required for the model application, we then looked at how to build a Snapchat-like AR filter.  If you want to know the further steps to build AR filter such as loading the model, and so on, be sure to check out the book  'Machine Learning Projects for Mobile Applications'. Snapchat source code leaked and posted to GitHub Snapchat is losing users – but revenue is up 15 year old uncovers Snapchat’s secret visual search function
Building a Progressive Web Application with Create React App 2 [Tutorial]

Bhagyashree R
13 Mar 2019
12 min read
The beauty of building a modern web application is being able to take advantage of functionalities such as a Progressive Web App (PWA)! But they can be a little complicated to work with. As always, the Create React App tool makes a lot of this easier for us but does carry some significant caveats that we'll need to think about. This article is taken from the book  Create React App 2 Quick Start Guide by Brandon Richey. This book is intended for those that want to get intimately familiar with the Create React App tool. It covers all the commands in Create React App and all of the new additions in version 2.  To follow along with the examples implemented in this article, you can download the code from the book’s GitHub repository. In this article, we will learn what exactly PWAs are and how we can configure our Create React App project into a custom PWA. We will also explore service workers, their life cycle, and how to use them with Create React App. Understanding and building PWAs Let's talk a little bit about what a PWA is because there is, unfortunately, a lot of misinformation and confusion about precisely what a PWA does! In very simple words, it's simply a website that does the following: Only uses HTTPS Adds a JSON manifest (a web app manifest) file Has a Service Worker A PWA, for us, is a React application that would be installable/runnable on a mobile device or desktop. Essentially, it's just your app, but with capabilities that make it a little more advanced, a little more effective, and a little more resilient to poor/no internet. A PWA accomplishes these via a few tenets, tricks, and requirements that we'd want to follow: The app must be usable by mobile and desktop-users alike The app must operate over HTTPS The app must implement a web app JSON manifest file The app must implement a service worker Now, the first one is a design question. Did you make your design responsive? If so, congratulations, you built the first step toward having a PWA! The next one is also more of an implementation question that's maybe not as relevant to us here: when you deploy your app to production, did you make it HTTPS only? I hope the answer to this is yes, of course, but it's still a good question to ask! The next two, though, are things we can do as part of our Create React App project, and we'll make those the focus of this article. Building a PWA in Create React App Okay, so we identified the two items that we need to build to make this all happen: the JSON manifest file and the service worker! Easy, right? Actually, it's even easier than that. You see, Create React App will populate a JSON manifest file for us as part of our project creation by default. That means we have already completed this step! Let's celebrate, go home, and kick off our shoes, because we're all done now, right? Well, sort of. We should take a look at that default manifest file because it's very unlikely that we want our fancy TodoList project to be called "Create React App Sample". Let's take a look at the manifest file, located in public/manifest.json: { "short_name": "React App", "name": "Create React App Sample", "icons": [ { "src": "favicon.ico", "sizes": "64x64 32x32 24x24 16x16", "type": "image/x-icon" } ], "start_url": ".", "display": "standalone", "theme_color": "#000000", "background_color": "#ffffff" } Some of these keys are pretty self-explanatory or at least have a little bit of information that you can infer from them as to what they accomplish. Some of the other keys, though, might be a little stranger. 
For example, what does "start_url" mean? What are the different options we can pick for display? What's a "theme_color" or "background_color"? Aren't those just decided by the CSS of our application? Not really. Let's dive deeper into this world of JSON manifest files and turn it into something more useful!
Viewing our manifest file in action with Chrome
First, to be able to test this, we should have something where we can verify the results of our changes. We'll start off with Chrome, where if you go into the Developer tools section, you can navigate to the Application tab and be brought right to the Service Workers section! Let's take a look at what it all looks like for our application:
Exploring the manifest file options
Having a manifest file with no explanation of what the different keys and options mean is not very helpful. So, let's learn about each of them, the different configuration options available to us, and some of the possible values we could use for each.
name and short_name
The first key we have is short_name. This is a shorter version of the name that might be displayed when, for example, the title can only display a smaller bit of text than the full app or site name. The counterpart to this is name, which is the full name of your application. For example:
{
  "short_name": "Todos",
  "name": "Best Todoifier"
}
icons
Next is the icons key, which is a list of sub-objects, each of which has three keys. This contains a list of icons that the PWA should use, whether it's for displaying on someone's desktop, someone's phone home screen, or something else. Each "icon" object should contain an "src", which is a link to the image file that will be your icon. Next, you have the "type" key, which should tell the PWA what type of image file you're working with. Finally, we have the "sizes" key, which tells the PWA the size of the icon. For best results, you should have at least a "512x512" and a "192x192" icon.
start_url
The start_url key is used to tell the application at what point it should start in your application in relation to your server. While we're not using it for anything, as we have a single-page, no-route app, that might be different in a much larger application, so you might just want the start_url key to be something indicating where you want users to start off from. Another option would be to add a query string onto the end of the URL, such as a tracking link. An example of that would be something like this:
{
  "start_url": "/?source=AB12C"
}
background_color
This is the color used when a splash screen is displayed as the application is first launched. This is similar to when you launch an application from your phone for the first time; that little page that pops up temporarily while the app loads is the splash screen, and background_color would be the background of that. This can either be a color name like you'd use in CSS, or it can be a hex value for a color.
display
The display key affects the browser's UI when the application is launched. There are ways to make the application full-screen, to hide some of the UI elements, and so on. Here are the possible options, with their explanations:
- browser: A normal web browser experience.
- fullscreen: No browser UI; takes up the entire display.
- standalone: Makes the web app look like a native application. It will run in its own window and hides a lot of the browser UI to make it look and feel more native.
orientation
If you want to make your application run in the landscape orientation, you would specify it here.
Otherwise, you would leave this option missing from your manifest: { "orientation": "landscape" } scope Scope helps to determine where the PWA in your site lies and where it doesn't. This prevents your PWA from trying to load things outside where your PWA runs. start_url must be located inside your scope for it to work properly! This is optional, and in our case, we'll be leaving it out. theme_color This sets the color of the toolbar, again to make it feel and look a little more native. If we specify a meta-theme color, we'd set this to be the same as that specification. Much like background color, this can either be a color name, as you'd use in CSS, or it can be a hex value for a color. Customizing our manifest file Now that we're experts on manifest files, let's customize our manifest file! We're going to change a few things here and there, but we won't make any major changes. Let's take a look at how we've set up the manifest file in public/manifest.json: { "short_name": "Todos", "name": "Best Todoifier", "icons": [ { "src": "favicon.ico", "sizes": "64x64 32x32 24x24 16x16", "type": "image/x-icon" } ], "start_url": "/", "display": "standalone", "theme_color": "#343a40", "background_color": "#a5a5f5" } So we've set our short_name and name keys to match the actual application. We've left the icons key alone completely since we don't really need to do much of anything with that anyway. Next, we've changed start_url to just be "/", since we're working under the assumption that this application is the only thing running on its domain. We've set the display key to standalone since we want our application to have the ability to be added to someone's home screen and be recognized as a true PWA. Finally, we set the theme color to #343a40, which matches the color of the nav bar and will give a more seamless look and feel to the PWA. We also set the background_color key, which is for our splash screen, to #a5a5f5, which is the color of our normal Todo items! If you think back to the explanation of keys, you'll remember we also need to change our meta-theme tag in our public/index.html file, so we'll open that up and quickly make that change: <meta name="theme-color" content="#343a40" /> And that's it! Our manifest file has been customized! If we did it all correctly, we should be able to verify the changes again in our Chrome Developer tools: Hooking up service workers Service workers are defined as a script that your browser runs behind the scenes, separate from the main browser threads. It can intercept network requests, interact with a cache (either storing or retrieving information from a cache), or listen to and deliver push messages. The service worker life cycle The life cycle for a service worker is pretty simple. There are three main stages: Registration Installation Activation Registration is the process of letting the browser know where the service worker is located and how to install it into the background. The code for registration may look something like this: if ('serviceWorker' in navigator) { navigator.serviceWorker.register('/service-worker.js') .then(registration => { console.log('Service Worker registered!'); }) .catch(error => { console.log('Error registering service worker! Error is:', error); }); } Installation is the process that happens after the service worker has been registered, and only happens if the service worker either hasn't already been installed, or the service worker has changed since the last time. 
In a service-worker.js file, you'd add something like this to be able to listen to this event:
self.addEventListener('install', event => {
  // Do something after install
});
Finally, Activation is the step that happens after all of the other steps have completed. The service worker has been registered and then installed, so now it's time for the service worker to start doing its thing:
self.addEventListener('activate', event => {
  // Do something upon activation
});
How can we use a service worker in our app?
So, how do we use a service worker in our application? Well, it's simple to do with Create React App, but there is a major caveat: you can't configure the service-worker.js file generated by Create React App by default without ejecting your project! Not all is lost, however; you can still take advantage of some of the highlights of PWAs and service workers by using the default Create React App-generated service worker. To enable this, hop over into src/index.js and, at the final line, change the service worker unregister() call to register() instead:
serviceWorker.register();
And now we're opting into our service worker! Next, to actually see the results, you'll need to run the following:
$ yarn build
This creates a production build. You'll see some output that we'll want to follow as part of this:
The build folder is ready to be deployed.
You may serve it with a static server:
yarn global add serve
serve -s build
As per the instructions, we'll install serve globally, and run the command as instructed:
$ serve -s build
We will get the following output:
Now open up http://localhost:5000 in your local browser and you'll be able to see, again in the Chrome Developer tools, the service worker up and running for your application.
Hopefully, we've explored at least enough of PWAs that they have been partially demystified! A lot of the confusion and trouble with building PWAs tends to stem from the fact that there's not always a good starting point for building one. Create React App limits us a little bit in how we can implement service workers, which admittedly limits the functionality and usefulness of our PWA. It doesn't hamstring us by any means, but it does keep us from doing fun tricks such as pre-caching network and API responses, or loading up our application instantly even if the browser doing the loading is offline in the first place. That being said, it's like many other things in Create React App: an amazing stepping stone and a great way to get moving with PWAs in the future!
If you found this post useful, do check out the book, Create React App 2 Quick Start Guide. In addition to getting familiar with Create React App 2, you will also build modern React projects with SASS and progressive web applications.
ReactOS 0.4.11 is now out with kernel improvements, manifests support, and more!
React Native community announce March updates, post sharing the roadmap for Q4
React Native Vs Ionic: Which one is the better mobile app development framework?
Implementing Identity Security in Microsoft Azure [Tutorial]

Savia Lobo
28 Sep 2018
13 min read
Security boundaries are often thought of as firewalls, IPS/IDS systems, or routers. Also, logical setups such as DMZs or other network constructs are often referred to as boundaries. But in the modern world, where many companies support dynamic work models that allow you to bring your own device (BYOD) or are heavily using online services for their work, the real boundary is the identity. Today's tutorial is an excerpt taken from the book, Implementing Azure Solutions written by Florian Klaffenbach, Jan-Henrik Damaschke, and Oliver Michalski. In this book, you will learn how to secure a newly deployed Azure Active Directory and also learn how Azure Active Directory Synchronization could be implemented. In this post, you will learn how to secure identities on Microsoft Azure. Identities are often the target of hackers as they resell them or use them to steal information. To make the attacks as hard as possible, it's important to have a well-conceived and centralized identity management. The Azure Active Directory provides that and a lot more to support your security strategy and simplify complex matters such as monitoring of privileged accounts or authentication attacks. Azure Active Directory Azure Active Directory (AAD) is a very important service that many other services are based on. It's not a directory service like many may think of when they hear the name Active Directory. The AAD is a complex structure without built-in Organizational Units (OUs) or Group Policy Objects (GPOs), but with a very high extensibility, open web standards for authorization and authentication, and a modern, (hybrid) cloud-focused approach to identity management. Azure Active Directory editions The following table describes the differences between the four Azure Active Directory editions: Services Common Basic Premium P1 Premium P2 Directory Objects X X X X User/Group Management (add/update/delete)/ User-based provisioning, Device registration X X X X Single Sign-On (SSO) X X X X Self-Service Password Change for cloud users X X X X Connect (Sync engine that extends on-premises directories to Azure Active Directory) X X X X Security/Usage Reports X X X X Group-based access management/ provisioning X X X Self-Service Password Reset for cloud users X X X Company Branding (Logon Pages/Access Panel customization) X X X Application Proxy X X X SLA 99.9% X X X Self-Service Group and app Management/Self-Service application additions/Dynamic Groups X X Self-Service Password Reset/Change/Unlock with on-premises write-back X X Multi-factor authentication (Cloud and On-premises (MFA Server)) X X MIM CAL + MIM Server X X Cloud App Discovery X X Connect Health X X Automatic password rollover for group accounts X X Identity Protection X Privileged Identity Management X Overview of Azure Active Directory editions Privileged Identity Management With the help of Azure AD Privileged Identity Management (PIM), the user can access various capabilities. These include the ability to view which users are Azure AD administrators and the possibility to enable administrative services on demand such as Office 365 or Intune. Furthermore, the user is able to receive reports about changes in administrator assignments or administrator access history. The AAD PIM allows the user to monitor the access in the organization. Additionally, it is possible to manage and control the access. Resources in Azure AD and services such as Office 365 or Microsoft Intune can be accessed. 
Lastly, the user can get alerts, if accesses to privileged roles are granted. Let's take a look at the PIM dashboard. To use Azure PIM, it needs to be activated first: Azure AD PIM should be found in the search easily: Azure AD PIM in the marketplace After clicking on Azure AD PIM in the marketplace, PIM will probably ask you to re-verify MFA for security reasons. To do this, the MFA token needs to be typed in, after clicking on Verify my identity: Re-verify identity for Azure AD PIM setup After a successful verification you will get an output as illustrated here: Successful re-verification Now the initial setup will start. The setup guides the user through the task of choosing all accounts in the tenant that have privileged rights. It is also possible to select them if they are eligible for requesting privileged role rights. If the wizard is completed without choosing any roles or user as eligible, it will by default assign the security administrator and privileged role administrator roles to the first user that does the PIM setup. Only with these roles it is possible to manage other privileged accounts and make them eligible or grant them rights: Setup wizard for Azure AD PIM In the following screenshot the tasks related to Privileged Identity Management are illustrated: Azure AD PIM main tasks As my subscription looks very boring after enabling Azure AD PIM I chose to show a demo picture from Microsoft that shows how Azure AD PIM could look in a real-world subscription: Azure AD PIM dashboard (https://docs.microsoft.com/en-us/azure/active-directory/media/active-directory-privileged-identity-management-configure/pim_dash.png) It's now possible to manage all chosen eligible privileged accounts and roles through Azure AD PIM. Besides removing and adding eligible users to Azure AD PIM, there is also a management of privileged roles, where the role activation setting is available. This setting is used to make privileged roles more transparent, trackable, and to implement the just-in-time (JIT) administration model. This is the activation setting blade for the Security Administrator: Role activation settings for the role Security Administrator It's also possible to use the rich monitoring and auditing capabilities of Azure AD PIM and never to lose track of the use of privileged accounts and to track misuse easily. Azure AD PIM is a very useful security feature and it is even more useful in combination with Azure AD Identity Protection. Identity protection Azure AD identity is a service that provides a central dashboard that informs administrators about potential threats to the organizations identities. It is based on behavioral analysis and it provides an overview of risks levels and vulnerabilities. The Azure AD anomaly detection capabilities are used by Azure AD Identity Protection to report suspicious activities. These enable one to identify anomalies of the organization's identity in real time, making it more powerful than the regular reporting capabilities. This system will calculate a user's risk level for each account, giving you the ability to configure risk-based policies in order to automatically protect the identities of your organization. Employing these risk-based policies among other conditional access controls provided by AAD and EMS enables an organization to provide remediation actions or block access to certain accounts. The key capabilities of Azure Identity Protection can be grouped into two phases. 
Detection of vulnerabilities and potential risky accounts This phase is basically about automatically classifying suspicious sign-in or user activity. It uses user-defined sign-in risk and user risk policies. These policies are described later. Another feature of this part is the automatic security recommendations (vulnerabilities) based on Azure provided rules. Investigation of potential suspicious events This part is all about investigating the alerts and events that are triggered by the risk policies. So basically, a security related person needs to review all users that got flagged based on policies and take a look at the particular risk events that triggered this alert and so contributed to the higher risk level. It's also important to define a workflow that is used to take the corresponding actions. It also needs someone who regularly investigates the vulnerability warnings and recommendations and estimates the real risk level for the organization. It's important to take a closer look at the risk policies that can be configured in Azure AD Identity Protection. We will skip the Multi-factor authentication registration configurations here. For more details on MFA, read the next paragraph. Just because it can't be said often enough, I highly recommend enforcing MFA for all accounts! The two policies we can configure are user risk policy and sign-in risk policy. The options look quite similar, but the real differentiation happens in the background: Sign-in risk policy In the following diagram, user risk policy view is illustrated: User risk policy The main differentiation between Sign-in and User risk policies is where the risk events are captured. The Sign-in policy defines what happens when a certain account appears to have a high number of suspicious sign-in events. This includes sign-in from an anonymous IP address, logins from different countries in a time frame where it would not be possible to travel to the other location, and a lot more. On the other hand, User risk policies trigger a certain amount of events that happen after the user was already logged in. Leaked credential or abnormal behavior are example risk events. Microsoft provides a guide to simulating some risks to verify that the corresponding policies trigger and events got recorded. This guide is provided at this address: https://docs.microsoft.com/en-us/azure/active-directory/active-directory-identityprotection-playbook. The interesting thing after choosing users is the Conditions setting. This setting defines the threshold of risk events that is required to trigger the policy. The different option for the threshold are Low and above, Medium and above, and High. When High is chosen, it needs much more risk events to trigger the policy, but it also has the lowest impact on users. When Low and above is chosen, the policy will trigger much more often, but the likelihood of false positives is much higher, too. Finding the right balance between security and usability is one more time the real challenge. The last option provides a preview of how many users will be affected by the new policy. This helps to review and identify possible misconfigurations of earlier steps. Multi-factor authentication Authentications that require more than a single type of verification method are defined as two-step verifications. This second layer of sign-in routine can critically improve the security of user transactions and sign-ins. 
This method can be described as employing two or more of the typical authentication factors, defined as either one, something you know (password), two, something you have (a trusted device such as a phone or a security token) or three, something you are (biometrics). Microsoft uses Azure MFA for this objective. The user's sign-in routine is kept simple, but Azure MFA improves safe access to data and applications. Several verification methods such as phone call, text message, or mobile app verification can help you to strengthen the authentication process. There are two pricing models for Azure MFA. One is based on users, the other is based on usage. Take your time and consider both models. Just go to the pricing calculator and calculate both to compare them. Now we will see how easy it is to activate MFA in Azure: First sign in to the old Azure portal (https://manage.windowsazure.com). After choosing the right Azure Active Directory, click on USERS: Azure Active Directory in the old Azure portal At the USERS page click on MANAGE MULTI-FACTOR AUTH at the very bottom: Multi-factor authentication settings Now a redirection to the MFA portal should take place. In this portal, the MFA management takes place. I will use my demo user Frederik to show the process of activating MFA: MFA portal Just choose the user that needs to be enabled for MFA and press Enable: Choosing users for MFA and enabling them is enabled for MFA After confirming the change, it takes a few seconds and the user is enabled for MFA. Do you really want to enable MFA? Search for the Multi-Factor Authentication in the search box and then click on it: Confirmation for enabling MFA To change the billing model for MFA, a new MFA provider needs to be created to replace the existing one. For this, the Multi-Factor Authentication resource should be created from the marketplace: MFA provider in the marketplace Click on Create button to proceed: Redirection at creation The little arrow in the box indicates a redirection to the old portal. In the old portal, the MFA provider page directly opens up. There is an exclamation mark next to the usage model. This and the text at the bottom warns that it's not possible to change the usage model afterwards: New MFA provider There are many more features related to Azure Active Directory and Identity Security that we are not able to discuss in this book. I encourage you to take a look at Azure AD Connect Health, Azure AD Cloud App Discovery, and Azure Information Protection (as part of EMS). It's important to know what services are offered and what advantages they could offer your company. This is an example dialogue that will be shown after typing the password when MFA is enabled and the Authenticator-App was chosen as the main method: MFA with authenticator-app Conditional access Another important security feature that is based on Azure Active Directory is conditional access. Although it's much more important when working with Office 365 it is also used to authenticate against Azure AD applications. A conditional access rule grants or denies access to a certain resource based on location, group membership, device state, and the application the user tries to access. After creating access rules that apply to all users who use the corresponding application, it's also possible to apply a rule to a security group or the other way around and exclude a group from applying. There are scenarios with MFA, where this could make sense. 
Currently, Conditional access is completely managed in the old Azure portal (https://manage.windowsazure.com). There is a conditional access feature in the new Azure portal, but it is still in preview and not supported for production. The administrator is also able to combine conditional access policies with Azure AD Multi-factor authentication (MFA). This will combine the MFA policies with those of other services such as Identity Protection or the basic MFA policy. This means that even if a user is per group excepted from authenticating with MFA to an application, all the other rules still apply. So if there is an MFA policy configured in Identity Protection that enforces MFA, the user still needs to log in using MFA. In the old portal the conditional access feature is configured on an application basis, in the new portal the conditional access rules are configured and managed in the Azure Active Directory resource: Per application management old Azure Portal Following screenshot shows view of new Azure AD: Central management in Azure Active Directory in the new portal To summarize, in this tutorial, we learned how to secure Azure identities from hackers. If you've enjoyed reading this post, do check out our book, Implementing Azure Solutions to manage, access, and secure your confidential data and to implement storage solutions. Azure Functions 2.0 launches with better workload support for serverless Microsoft’s Immutable storage for Azure Storage Blobs, now generally available Automate tasks using Azure PowerShell and Azure CLI [Tutorial]
Running a simple game using Pygame

Packt
29 Mar 2013
5 min read
How to do it... Imports: First we will import the required Pygame modules. If Pygame is installed properly, we should get no errors: import pygame, sys from pygame.locals import * Initialization: We will initialize Pygame by creating a display of 400 by 300 pixels and setting the window title to Hello world: pygame.init() screen = pygame.display.set_mode((400, 300)) pygame.display.set_caption('Hello World!') The main game loop: Games usually have a game loop, which runs forever until, for instance, a quit event occurs. In this example, we will only set a label with the text Hello world at coordinates (100, 100). The text has a font size of 19, red color, and falls back to the default font: while True: sys_font = pygame.font.SysFont("None", 19) rendered = sys_font.render('Hello World', 0, (255, 100, 100)) screen.blit(rendered, (100, 100)) for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() pygame.display.update() We get the following screenshot as the end result: The following is the complete code for the Hello World example: import pygame, sys from pygame.locals import * pygame.init() screen = pygame.display.set_mode((400, 300)) pygame.display.set_caption('Hello World!') while True: sysFont = pygame.font.SysFont("None", 19) rendered = sysFont.render('Hello World', 0, (255, 100, 100)) screen.blit(rendered, (100, 100)) for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() pygame.display.update() How it works... It might not seem like much, but we learned a lot in this recipe. The functions that passed the review are summarized in the following table: Function Description pygame.init() This function performs the initialization and needs to be called before any other Pygame functions are called. pygame.display.set_mode((400, 300)) This function creates a so-called   Surface object to draw on. We give this function a tuple representing the width and height of the surface. pygame.display.set_caption('Hello World!') This function sets the window title to a specified string value. pygame.font.SysFont("None", 19) This function creates a system font from a comma-separated list of fonts (in this case none) and a font size parameter. sysFont.render('Hello World', 0, (255, 100, 100)) This function draws text on a surface. The second parameter indicates whether anti-aliasing is used. The last parameter is a tuple representing the RGB values of a color. screen.blit(rendered, (100, 100)) This function draws on a surface. pygame.event.get() This function gets a list of Event objects. Events represent some special occurrence in the system, such as a user quitting the game. pygame.quit() This function cleans up resources used by Pygame. Call this function before exiting the game. pygame.display.update() This function refreshes the surface.   Drawing with Pygame Before we start creating cool games, we need an introduction to the drawing functionality of Pygame. As we noticed in the previous section, in Pygame we draw on the Surface objects. There is a myriad of drawing options—different colors, rectangles, polygons, lines, circles, ellipses, animation, and different fonts. How to do it... 
The following steps will help you diverge into the different drawing options you can use with Pygame: Imports: We will need the NumPy library to randomly generate RGB values for the colors, so we will add an extra import for that: import numpy Initializing colors: Generate four tuples containing three RGB values each with NumPy: colors = numpy.random.randint(0, 255, size=(4, 3)) Then define the white color as a variable: WHITE = (255, 255, 255) Set the background color: We can make the whole screen white with the following code: screen.fill(WHITE) Drawing a circle: Draw a circle in the center with the window using the first color we generated: pygame.draw.circle(screen, colors[0], (200, 200), 25, 0) Drawing a line: To draw a line we need a start point and an end point. We will use the second random color and give the line a thickness of 3: pygame.draw.line(screen, colors[1], (0, 0), (200, 200), 3) Drawing a rectangle: When drawing a rectangle, we are required to specify a color, the coordinates of the upper-left corner of the rectangle, and its dimensions: pygame.draw.rect(screen, colors[2], (200, 0, 100, 100)) Drawing an ellipse: You might be surprised to discover that drawing an ellipse requires similar parameters as for rectangles. The parameters actually describe an imaginary rectangle that can be drawn around the ellipse: pygame.draw.ellipse(screen, colors[3], (100, 300, 100, 50), 2) The resulting window with a circle, line, rectangle, and ellipse using random colors: The code for the drawing demo is as follows: import pygame, sys from pygame.locals import * import numpy pygame.init() screen = pygame.display.set_mode((400, 400)) pygame.display.set_caption('Drawing with Pygame') colors = numpy.random.randint(0, 255, size=(4, 3)) WHITE = (255, 255, 255) #Make screen white screen.fill(WHITE) #Circle in the center of the window pygame.draw.circle(screen, colors[0], (200, 200), 25, 0) # Half diagonal from the upper-left corner to the center pygame.draw.line(screen, colors[1], (0, 0), (200, 200), 3) pygame.draw.rect(screen, colors[2], (200, 0, 100, 100)) pygame.draw.ellipse(screen, colors[3], (100, 300, 100, 50), 2) while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() pygame.display.update() Summary Here we saw how to create a basic game to get us started. The game demonstrated fonts and screen management in the time-honored tradition of Hello world examples. The next section of drawing with Pygame taught us how to draw basic shapes such as rectangles, ovals, circles, lines, and others. We also learned important information about colors and color management. Resources for Article : Further resources on this subject: Using Execnet for Parallel and Distributed Processing with NLTK [Article] TortoiseSVN: Getting Started [Article] Python 3 Object Oriented Programming: Managing objects [Article]
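To tie the two recipes together, here is a minimal sketch of an animation loop that reuses the game loop from the Hello World example, redraws a circle at a new position each frame, and caps the frame rate with pygame.time.Clock (the window size, colors, and speeds are arbitrary choices, not values from the recipes above):

import sys
import pygame
from pygame.locals import QUIT

pygame.init()
screen = pygame.display.set_mode((400, 300))
pygame.display.set_caption('Bouncing circle')
clock = pygame.time.Clock()

x, y = 200, 150      # starting position of the circle
dx, dy = 3, 2        # velocity in pixels per frame
WHITE = (255, 255, 255)
RED = (255, 100, 100)

while True:
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()

    # Move the circle and bounce off the window edges
    x += dx
    y += dy
    if x < 25 or x > 375:
        dx = -dx
    if y < 25 or y > 275:
        dy = -dy

    screen.fill(WHITE)                                # clear the previous frame
    pygame.draw.circle(screen, RED, (x, y), 25, 0)    # redraw at the new position
    pygame.display.update()
    clock.tick(60)                                    # cap at 60 frames per second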
Creating a Web Page for Displaying Data from SQL Server 2008

Packt
23 Oct 2009
5 min read
This article by Jayaram Krishnaswamy describes how you may connect to SQL Server 2008 and display the retrieved data in a GridView Control on a web page. Trying to establish a connection to the SQL Server 2008 is not possible in Visual Studio 2008 as you will see soon in the tutorial. One way to get around this, as shown in this tutorial, is to create an ODBC connection to the SQL Server and then using the ODBC connection to retrieve the data. Visual Studio 2008 Version: 9.0.21022.8 RTM, Microsoft Windows XP Professional Media Center Edition, and SQL Server 'Katmai' were used for this tutorial. (For more resources on Microsoft, see here.) Connecting to SQL Server 2008 is Not Natively Supported in Microsoft Visual Studio 2008 Designer In the Visual Studio 2008 IDE make a right click on the Data Connections node in the Server Explorer. This will open up the Add Connection window where the default connection being displayed is MS SQL Server Compact. Click on the Change... button which opens the Change Data Source window shown in the next figure. Highlight Microsoft SQL Server as shown and click on the OK button. This once again opens the Add Connection window showing SQL Server 2008 on the machine, Hodentek as shown in the next figure in this case. The connection is set for Windows Authentication and should you test the connectivity you would get 'Success' as a reply. However when you click on the handle for the database name to retrieve a list of databases on this server, you would get a message as shown. Creating a ODBC DSN You will be using the ODBC Data Source Administrator on your desktop to create a ODBC DSN. You access the ODBC Source Administrator from Start | All Programs | Control Panel | Administrative Tools | Data Sources(ODBC). This opens up ODBC Data Source Administrator window as shown in the next figure. Click on System DSN tab and click on the Add... button. This opens up the Create New Data Source window where you scroll down to SQL Server Native Client 10.0. Click on the Finish button. This will bring up the Create a New Data Source to SQL Server window. You must provide a name in the Name box. You also provide a description and click on the drop-down handle for the question, Which SQL Server do you want to connect to? to reveal a number of accessible servers as shown. Highlight SQL Server 2008. Click on the Next button which opens a window where you provide the authentication information. This server uses windows authentication and if your server uses SQL Server authentication you will have to be ready to provide the LoginID and Password. You may accept the default for other configurable options. Click on the Next button which opens a window where you choose the default database to which you want to establish a connection. Click on the Next button which opens a window where you accept the defaults and click on the Finish button. This brings up the final screen, the ODBC Data SQL Server Setup which summarizes the options made as shown. By clicking on the Test Data Source... button you can verify the connectivity. When you click on the OK button you will be taken back to the ODBC Data Source Administrator window where the DSN you created is now added to the list of DSNs on your machine as shown. Retrieving Data from the Server to a Web Page You will be creating an ASP.NET website project. As this version of Visual Studio supports projects in different versions, choose the Net Framework 2.0 as shown. 
On to the Default.aspx page, drag and drop a GridView control from the Toolkit as shown in this design view. Click on the Smart task handle to reveal the tasks you need to complete this control. Click on the drop-down handle for the Choose Data Source: task as shown in the previous figure. Now click on the <New data Source...> item. This opens the Data Source Configuration Wizard window which displays the various sources from which you may get your data. Click on the Database icon. Now the OK button becomes visible. Click on the OK button. The wizard's next task is to guide you to get the connection information as in the next figure. Click on the New Connection... button. This will take you back to the Add Connection window. Click on the Change... button as shown earlier in the tutorial. In the Change Data Source window, you now highlight the Microsoft ODBC Data Source as shown in the next figure. Click on the OK button. This opens the Add Connection window where you can now point to the ODBC source you created earlier, using the drop-down handle for the Use user or system data source name. You may also test your connection by hitting the Test Connection button. Click on the OK button. This brings the connection information to the wizard's screen as shown in the next figure. Click on the Next button which opens a window in which you have the option to save your connection information to the configuration node of your web.config file. Make sure you read the information on this page. The default connection name has been changed to Conn2k8 as shown. Click on the Next button. This will bring up the screen where you provide a SQL Select statement to retrieve the columns you want. You have three options and here the Specify a custom SQL Statement or stored procedure option is chosen.
Amazon re:MARS Day 1 kicks off showcasing Amazon’s next-gen AI robots; Spot, the robo-dog and a guest appearance from ‘Iron Man’

Savia Lobo
06 Jun 2019
11 min read
Amazon’s inaugural re:MARS event kicked off on Tuesday, June 4 at the Aria in Las Vegas. This 4-day event is inspired by MARS, a yearly invite-only event hosted by Jeff Bezos that brings together innovative minds in Machine learning, Automation, Robotics, and Space to share new ideas across these rapidly advancing domains. re:MARS featured a lot of announcements revealing a range of robots each engineered for a different purpose. Some of them include helicopter drones for delivery, two robot dogs by Boston Dynamics, Autonomous human-like acrobats by Walt Disney Imagineering, and much more. Amazon also revealed Alexa’s new Dialog Modeling for Natural, Cross-Skill Conversations. Let us have a brief look at each of the announcements. Robert Downey Jr. announces ‘The Footprint Coalition’ project to clean up the environment using Robotics Popularly known as the “Iron Man”, Robert Downey Jr.’s visit was one of the exciting moments where he announced a new project called The Footprint Coalition to clean up the planet using advanced technologies at re:MARS. “Between robotics and nanotechnology we could probably clean up the planet significantly, if not entirely, within a decade,” he said. According to The Forbes, “Amazon did not immediately respond to questions about whether it was investing financially or technologically in Downey Jr.’s project.” “At this point, the effort is severely light on details, with only a bare-bones website to accompany Downey’s public statement, but the actor said he plans to officially launch the project by April 2020,” Forbes reports. A recent United Nations report found that humans are having an unprecedented and devastating effect on global biodiversity, and researchers have found microplastics polluting the air, ocean, and soil. The announcement of this project has been opened to the public because the “company itself is under fire for its policies around the environment and climate change”. Additionally, Morgan Pope and Tony Dohi of Walt Disney Imagineering, also demonstrated their work to create autonomous acrobats. https://twitter.com/jillianiles/status/1136082571081555968 https://twitter.com/thesullivan/status/1136080570549563393 Amazon will soon deliver orders using drones On Wednesday, Amazon unveiled a revolutionary new drone that will test deliver toothpaste and other household goods starting within months. This drone is “part helicopter and part science-fiction aircraft” with built-in AI features and sensors that will help it fly robotically without threatening traditional aircraft or people on the ground. Gur Kimchi, vice president of Amazon Prime Air, said in an interview to Bloomberg, “We have a design that is amazing. It has performance that we think is just incredible. We think the autonomy system makes the aircraft independently safe.” However, he refused to provide details on where the delivery tests will be conducted. Also, the drones have received a year’s approval from the FAA to test the devices in limited ways that still won't allow deliveries. According to a Bloomberg report, “It can take years for traditional aircraft manufacturers to get U.S. Federal Aviation Administration approval for new designs and the agency is still developing regulations to allow drone flights over populated areas and to address national security concerns. The new drone presents even more challenges for regulators because there aren’t standards yet for its robotic features”. 
Competitors to Amazon’s unnamed drone include Alphabet Inc.’s Wing, which became the first drone to win an FAA approval to operate as a small airline, in April. Also, United Parcel Service Inc. and drone startup Matternet Inc. began using drones to move medical samples between hospitals in Raleigh, North Carolina, in March. Amazon’s drone is about six feet across with six propellers that lift it vertically off the ground. It is surrounded by a six-sided shroud that will protect people from the propellers, and also serves as a high-efficiency wing such that it can fly more horizontally like a plane. Once it gets off the ground, the craft tilts and flies sideways -- the helicopter blades becoming more like airplane propellers. Kimchi said, “Amazon’s business model for the device is to make deliveries within 7.5 miles (12 kilometers) from a company warehouse and to reach customers within 30 minutes. It can carry packages weighing as much as five pounds. More than 80% of packages sold by the retail behemoth are within that weight limit.” According to the company, one of the things the drone has mastered is detecting utility wires and clotheslines. They have been notoriously difficult to identify reliably and pose a hazard for a device attempting to make deliveries in urban and suburban areas. To know more about these high-tech drones in detail, head over to Amazon’s official blogpost. Boston Dynamics’ first commercial robot, Spot Boston Dynamics revealed its first commercial product, a quadrupedal robot named Spot.  Boston Dynamics’ CEO Marc Raibert told The Verge, “Spot is currently being tested in a number of “proof-of-concept” environments, including package delivery and surveying work.” He also said that although there’s no firm launch date for the commercial version of Spot, it should be available within months, certainly before the end of the year. “We’re just doing some final tweaks to the design. We’ve been testing them relentlessly”, Raibert said. These Spot robots are capable of navigating environments autonomously, but only when their surroundings have been mapped in advance. They can withstand kicks and shoves and keep their balance on tricky terrain, but they don’t decide for themselves where to walk. These robots are simple to control; using a D-pad, users can steer the robot as just like an RC car or mechanical toy. A quick tap on the video feed streamed live from the robot’s front-facing camera allows to select a destination for it to walk to, and another tap lets the user assume control of a robot arm mounted on top of the chassis. With 3D cameras mounted atop, a Spot robot can map environments like construction sites, identifying hazards and work progress. It also has a robot arm which gives it greater flexibility and helps it open doors and manipulate objects. https://twitter.com/jjvincent/status/1136096290016595968 The commercial version will be “much less expensive than prototypes [and] we think they’ll be less expensive than other peoples’ quadrupeds”, Raibert said. Here’s a demo video of the Spot robot at the re:MARS event. https://youtu.be/xy_XrAxS3ro Alexa gets new dialog modeling for improved natural, cross-skill conversations Amazon unveiled new features in Alexa that would help the conversational agent to answer more complex questions and carry out more complex tasks. 
Rohit Prasad, Alexa vice president and head scientist, said, “We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa.” This new update to Alexa is a set of AI modules that work together to generate responses to customers’ questions and requests. With every round of dialog, the system produces a vector — a fixed-length string of numbers — that represents the context and the semantic content of the conversation. “With this new approach, Alexa will predict a customer’s latent goal from the direction of the dialog and proactively enable the conversation flow across topics and skills,” Prasad says. “This is a big leap for conversational AI.” At re:MARS, Prasad also announced the developer preview of Alexa Conversations, a new deep learning-based approach for skill developers to create more-natural voice experiences with less effort, fewer lines of code, and less training data than before. The preview allows skill developers to create natural, flexible dialogs within a single skill; upcoming releases will allow developers to incorporate multiple skills into a single conversation. With Alexa Conversations, developers provide: (1) application programming interfaces, or APIs, that provide access to their skills’ functionality; (2) a list of entities that the APIs can take as inputs, such as restaurant names or movie times;  (3) a handful of sample dialogs annotated to identify entities and actions and mapped to API calls. Alexa Conversations’ AI technology handles the rest. “It’s way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling,” Prasad said. To know more about this announcement in detail, head over to Alexa’s official blogpost. Amazon Robotics unveiled two new robots at its fulfillment centers Brad Porter, vice president of robotics at Amazon, announced two new robots, one is, code-named Pegasus and the other one, Xanthus. Pegasus, which is built to sort packages, is a 3-foot-wide robot equipped with a conveyor belt on top to drop the right box in the right location. “We sort billions of packages a year. The challenge in package sortation is, how do you do it quickly and accurately? In a world of Prime one-day [delivery], accuracy is super-important. If you drop a package off a conveyor, lose track of it for a few hours  — or worse, you mis-sort it to the wrong destination, or even worse, if you drop it and damage the package and the inventory inside — we can’t make that customer promise anymore”, Porter said. Porter said Pegasus robots have already driven a total of 2 million miles, and have reduced the number of wrongly sorted packages by 50 percent. Porter said the Xanthus, represents the latest incarnation of Amazon’s drive robot. Amazon uses tens of thousands of the current-generation robot, known as Hercules, in its fulfillment centers. Amazon unveiled Xanthus Sort Bot and Xanthus Tote Mover. “The Xanthus family of drives brings innovative design, enabling engineers to develop a portfolio of operational solutions, all of the same hardware base through the addition of new functional attachments. We believe that adding robotics and new technologies to our operations network will continue to improve the associate and customer experience,” Porter says. 
To know more about these new robots watch the video below: https://youtu.be/4MH7LSLK8Dk StyleSnap: An AI-powered shopping Amazon announced StyleSnap, a recent move to promote AI-powered shopping. StyleSnap helps users pick out clothes and accessories. All they need to do is upload a photo or screenshot of what they are looking for, when they are unable to describe what they want. https://twitter.com/amazonnews/status/1136340356964999168 Amazon said, "You are not a poet. You struggle to find the right words to explain the shape of a neckline, or the spacing of a polka dot pattern, and when you attempt your text-based search, the results are far from the trend you were after." To use StyleSnap, just open the Amazon app, click the camera icon in the upper right-hand corner, select the StyleSnap option, and then upload an image of the outfit. Post this, StyleSnap provides recommendations of similar outfits on Amazon to purchase, with users able to filter across brand, pricing, and reviews. Amazon's AI system can identify colors and edges, and then patterns like floral and denim. Using this information, its algorithm can then accurately pick a matching style. To know more about StyleSnap in detail, head over to Amazon’s official blog post. Amazon Go trains cashierless store algorithms using synthetic data Amazon at the re:MARS shared more details about Amazon Go, the company’s brand for its cashierless stores. They said Amazon Go uses synthetic data to intentionally introduce errors to its computer vision system. Challenges that had to be addressed before opening stores to avoid queues include the need to make vision systems that account for sunlight streaming into a store, little time for latency delays, and small amounts of data for certain tasks. Synthetic data is being used in a number of ways to power few-shot learning, improve AI systems that control robots, train AI agents to walk, or beat humans in games of Quake III. Dilip Kumar, VP of Amazon Go, said, “As our application improved in accuracy — and we have a very highly accurate application today — we had this interesting problem that there were very few negative examples, or errors, which we could use to train our machine learning models.” He further added, “So we created synthetic datasets for one of our challenging conditions, which allowed us to be able to boost the diversity of the data that we needed. But at the same time, we have to be careful that we weren’t introducing artifacts that were only visible in the synthetic data sets, [and] that the data translates well to real-world situations — a tricky balance.” To know more about this news in detail, check out this video: https://youtu.be/jthXoS51hHA The Amazon re:MARS event is still ongoing and will have many more updates. To catch live updates from Vegas visit Amazon’s blog. World’s first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase Amazon introduces S3 batch operations to process millions of S3 objects Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available

Implementing Dependency Injection in Spring [Tutorial]

Natasha Mathur
21 Aug 2018
9 min read
Spring is a lightweight, open source enterprise framework created back in 2003. Modularity is at the heart of the Spring framework, which is why Spring can be used from the presentation layer all the way down to the persistence layer. The good thing is, Spring doesn't force you to use it in every layer. For example, if you use Spring in the persistence layer, you are free to use any other framework in the presentation or the controller layer. In this article, we will look at implementing Dependency Injection (DI) in a Spring Java application. This tutorial is an excerpt taken from the book 'Java 9 Dependency Injection', written by Krunal Patel and Nilang Patel.

Spring is a POJO-based framework; a servlet container is sufficient to run your application and a fully-fledged application server is not required. DI is the process of providing dependent objects to the objects that need them. In Spring, the container supplies the dependencies; the flow of creating and managing dependencies is inverted from the client to the container, which is why we call it an IoC container. A Spring IoC container uses the Dependency Injection (DI) mechanism to provide dependencies at runtime. Now we'll talk about how we can implement constructor- and setter-based DI through Spring's IoC container.

Implementing Constructor-based DI

Constructor-based injection is generally used where you want to pass mandatory dependencies before the object is instantiated. The dependencies are provided by the container through a constructor with different arguments, each of which represents a dependency. When the container starts, it checks whether any constructor-based DI is defined for a <bean>. It will create the dependency objects first, and then pass them to the current object's constructor. We will understand this by taking the classic example of logging. It is good practice to put log statements at various places in the code to trace the flow of execution. Let's say you have an EmployeeService class where you need to put a log in each of its methods. To achieve separation of concerns, you put the log functionality in a separate class called Logger. To make sure EmployeeService and Logger are independent and loosely coupled, you need to inject the Logger object into the EmployeeService object. Let's see how to achieve this with constructor-based injection:

public class EmployeeService {
    private Logger log;

    //Constructor
    public EmployeeService(Logger log) {
        this.log = log;
    }

    //Service method.
    public void showEmployeeName() {
        log.info("showEmployeeName method is called ....");
        log.debug("This is a debugging point");
        log.error("Some exception occurred here ...");
    }
}

public class Logger {
    public void info(String msg) {
        System.out.println("Logger INFO: " + msg);
    }
    public void debug(String msg) {
        System.out.println("Logger DEBUG: " + msg);
    }
    public void error(String msg) {
        System.out.println("Logger ERROR: " + msg);
    }
}

public class DIWithConstructorCheck {
    public static void main(String[] args) {
        ApplicationContext springContext = new ClassPathXmlApplicationContext("application-context.xml");
        EmployeeService employeeService = (EmployeeService) springContext.getBean("employeeService");
        employeeService.showEmployeeName();
    }
}

As per the preceding code, when these objects are configured with Spring, the EmployeeService object expects the Spring container to inject the Logger object through its constructor.
To achieve this, you need to set the configuration metadata as per the following snippet: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> <!-- All your bean and its configuration metadata goes here --> <bean id="employeeService" class="com.packet.spring.constructor.di.EmployeeService"> <constructor-arg ref="logger"/> </bean> <bean id="logger" class="com.packet.spring.constructor.di.Logger"> </bean> </beans> In the preceding configuration, the Logger bean is injected into the employee bean through the constructor-arg element. It has a ref attribute, which is used to point to other beans with a matching id value.  This configuration instructs Spring to pass the object of Logger into the constructor of the EmployeeService bean. You can put the <bean> definition in any order here. Spring will create the objects of <bean> based on need, and not as per the order they are defined here. For more than one constructor argument, you can pass additional <constructor-arg> elements. The order is not important as far as the object type (class attribute of referred bean) is not ambiguous. Spring also supports DI with primitive constructor arguments. Spring provides the facility to pass the primitive values in a constructor from an application context (XML) file. Let's say you want to create an object of the Camera class with a default value, as per the following snippet: public class Camera { private int resolution; private String mode; private boolean smileShot; //Constructor. public Camera(int resolution, String mode, boolean smileShot) { this.resolution = resolution; this.mode = mode; this.smileShot = smileShot; } //Public method public void showSettings() { System.out.println("Resolution:"+resolution+"px mode:"+mode+" smileShot:"+smileShot); } } The Camera class has three properties: resolution, mode, and smileShot. Its constructor takes three primitive arguments to create a camera object with default values. You need to give configuration metadata in the following way so that Spring can create instances of the Camera object with default primitive values: <bean id="camera" class="com.packet.spring.constructor.di.Camera"> <constructor-arg type="int" value="12" /> <constructor-arg type="java.lang.String" value="normal" /> <constructor-arg type="boolean" value="false" /> </bean> We pass three <constructor-arg> elements under <bean>, corresponding to each constructor argument. Since these are primitive, Spring has no idea about its type while passing the value. So, we need to explicitly pass the type attribute, which defines the type of primitive constructor argument. In case of primitive also, there is no fixed order to pass the value of the constructor argument, as long as the type is not ambiguous. In previous cases, all three types are different, so Spring intelligently picks up the right constructor argument, no matter which orders you pass them. Now we are adding one more attribute to the Camera class called flash, as per the following snippet: //Constructor. 
public Camera(int resolution, String mode, boolean smileShot, boolean flash) { this.resolution = resolution; this.mode = mode; this.smileShot = smileShot; this.flash = flash; } In this case, the constructor arguments smileShot and flash are of the same type (Boolean), and you pass the constructor argument value from XML configuration as per the following snippet: <constructor-arg type="java.lang.String" value="normal"/> <constructor-arg type="boolean" value="true" /> <constructor-arg type="int" value="12" /> <constructor-arg type="boolean" value="false" /> In the preceding scenario, Spring will pick up the following: int value for resolution The string value for mode First Boolean value (true) in sequence for first Boolean argument—smileShot Second Boolean value (false) in sequence for second Boolean argument—flash In short, for similar types in constructor arguments, Spring will pick the first value that comes in the sequence. So sequence does matter in this case. This may lead to logical errors, as you are passing wrong values to the right argument. To avoid such accidental mistakes, Spring provides the facility to define a zero-based index in the <constructor-arg> element, as per the following snippet: <constructor-arg type="java.lang.String" value="normal" index="1"/> <constructor-arg type="boolean" value="true" index="3"/> <constructor-arg type="int" value="12" index="0"/> <constructor-arg type="boolean" value="false" index="2"/> This is more readable and less error-prone. Now Spring will pick up the last value (with index=2) for smileShot, and the second value (with index=3) for flash arguments. Index attributes resolve the ambiguity of two constructor arguments having the same type. If the type you defined in <constructor-arg> is not compatible with the actual type of constructor argument in that index, then Spring will raise an error. So just make sure about this while using index attribute. Implementing Setter-based DI Setter-based DI is generally used for optional dependencies. In case of setter-based DI, the container first creates an instance of your bean, either by calling a no-argument constructor or static factory method. It then passes the said dependencies through each setter method. Dependencies injected through the setter method can be re-injected or changed at a later stage of application. We will understand setter-based DI with the following code base: public class DocumentBase { private DocFinder docFinder; //Setter method to inject dependency. public void setDocFinder(DocFinder docFinder) { this.docFinder = docFinder; } public void performSearch() { this.docFinder.doFind(); } } public class DocFinder { public void doFind() { System.out.println(" Finding in Document Base "); } } public class DIWithSetterCheck { public static void main(String[] args) { ApplicationContext springContext = new ClassPathXmlApplicationContext("application-context.xml"); DocumentBase docBase = (DocumentBase) springContext.getBean("docBase"); docBase.performSearch(); } } The  DocumentBase class depends on DocFinder, and we are passing it through the setter method. You need to define the configuration metadata for Spring, as per the following snippet: <bean id="docBase" class="com.packet.spring.setter.di.DocumentBase"> <property name="docFinder" ref="docFinder" /> </bean> <bean id="docFinder" class="com.packet.spring.setter.di.DocFinder"> </bean> Setter-based DI can be defined through the <property> element under <bean>. The name attribute denotes the name of the setter name. 
In our case, the name attribute of the property element is docFinder, so Spring will call the setDocFinder method to inject the dependency. The pattern to find the setter method is to prepend set and make the first character capital. The name attribute of the <property> element is case-sensitive. So, if you set the name to docfinder, Spring will try to call the setDocfinder method and will show an error. Just like constructor DI, Setter DI also supports supplying the value for primitives, as per the following snippet: <bean id="docBase" class="com.packet.spring.setter.di.DocumentBase"> <property name="buildNo" value="1.2.6" /> </bean> Since the setter method takes only one argument, there is no scope of argument ambiguity. Whatever value you are passing here, Spring will convert it to an actual primitive type of the setter method parameter. If it's not compatible, it will show an error. We learned to implement DI with Spring and looked at different types of DI like setter-based injection and constructor-based injection. If you found this post useful, be sure to check out the book  'Java 9 Dependency Injection' to learn about factory method in Spring and other concepts in dependency injection. Learning Dependency Injection (DI) Angular 2 Dependency Injection: A powerful design pattern
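As a quick addendum to the setter-based primitive example above: for the buildNo configuration to work, DocumentBase must expose a matching setter. The original listing doesn't show it, so the following is a minimal sketch under the assumption that buildNo is held as a String (the XML passes "1.2.6" as text, and Spring converts it to whatever type the setter actually declares):

public class DocumentBase {
    private DocFinder docFinder;
    private String buildNo; // assumed type; Spring converts the XML value to the declared type

    // Setter methods used by Spring for injection.
    public void setDocFinder(DocFinder docFinder) {
        this.docFinder = docFinder;
    }

    public void setBuildNo(String buildNo) {
        this.buildNo = buildNo;
    }

    public void performSearch() {
        this.docFinder.doFind();
    }
}

Because the property name is buildNo, Spring looks for a setter named setBuildNo, following the same prepend-set-and-capitalize rule described earlier.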

Creating Hello World in Xamarin.Forms

Packt
06 Jan 2017
16 min read
Since the beginning of Xamarin's life as a company, their motto has always been to present the native APIs on iOS and Android idiomatically to C#. This was a great strategy in the beginning, because applications built with Xamarin.iOS or Xamarin.Android were pretty much indistinguishable from native Objective-C or Java applications. Code sharing was generally limited to non-UI code, which left a potential gap to fill in the Xamarin ecosystem: a cross-platform UI abstraction. Xamarin.Forms is the solution to this problem, a cross-platform UI framework that renders native controls on each platform. Xamarin.Forms is a great framework for those who know C# (and XAML) but may not want to get into the full details of using the native iOS and Android APIs. In this chapter, we will do the following:

- Create Hello World in Xamarin.Forms
- Discuss the Xamarin.Forms architecture
- Use XAML with Xamarin.Forms
- Cover data binding and MVVM with Xamarin.Forms

Creating Hello World in Xamarin.Forms

To understand how a Xamarin.Forms application is put together, let's begin by creating a simple Hello World application. Open Xamarin Studio and perform the following steps:

1. Create a new Multiplatform | App | Forms App project from the new solution dialog.
2. Name your solution something appropriate, such as HelloForms.
3. Make sure Use Portable Class Library is selected.
4. Click Next, then click Create.

Notice the three new projects that were successfully created: HelloForms, HelloForms.Android, and HelloForms.iOS. In Xamarin.Forms applications, the bulk of your code will be shared, and each platform-specific project is just a small amount of code that starts up the Xamarin.Forms framework. Let's examine the minimum parts of a Xamarin.Forms application:

- App.xaml and App.xaml.cs in the HelloForms PCL library -- this class is the main starting point of the Xamarin.Forms application. A simple property, MainPage, is set to the first page in the application. In the default project template, HelloFormsPage is created with a single label that will be rendered as a UILabel on iOS and a TextView on Android.
- MainActivity.cs in the HelloForms.Android Android project -- the main launcher activity of the Android application. The important part for Xamarin.Forms here is the call to Forms.Init(this, bundle), which initializes the Android-specific portion of the Xamarin.Forms framework. Next is a call to LoadApplication(new App()), which starts our Xamarin.Forms application.
- AppDelegate.cs in the HelloForms.iOS iOS project -- very similar to Android, except iOS applications start up using a UIApplicationDelegate class. Forms.Init() initializes the iOS-specific parts of Xamarin.Forms, and just as on Android, LoadApplication(new App()) starts the Xamarin.Forms application.

Go ahead and run the iOS project; you should see something similar to the following screenshot:

If you run the Android project, you will get a UI very similar to the iOS one shown in the following screenshot, but using native Android controls:

Even though it's not covered in this book, Xamarin.Forms also supports Windows Phone, WinRT, and UWP applications. However, a PC running Windows and Visual Studio is required to develop for Windows platforms. If you can get a Xamarin.Forms application working on iOS and Android, then getting a Windows Phone version working should be a piece of cake.
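To make the startup flow concrete, here is a minimal sketch of what the generated MainActivity.cs described above typically looks like. The exact base class, attribute values, and namespace come from the project template and can differ between Xamarin.Forms versions, so treat this as an illustration rather than the exact generated file:

using Android.App;
using Android.OS;

namespace HelloForms.Droid
{
    // MainLauncher = true marks this activity as the app's entry point.
    [Activity(Label = "HelloForms", MainLauncher = true)]
    public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsApplicationActivity
    {
        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);

            // Initialize the Android-specific portion of Xamarin.Forms.
            global::Xamarin.Forms.Forms.Init(this, bundle);

            // Hand control over to the shared Xamarin.Forms App class.
            LoadApplication(new App());
        }
    }
}

The AppDelegate.cs on iOS follows the same pattern, calling Forms.Init() and then LoadApplication(new App()) from its FinishedLaunching override.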
Understanding the architecture behind Xamarin.Forms Getting started with Xamarin.Forms is very easy, but it is always good to look behind the scenes to understand how everything is put together. In the earlier chapters of this book, we created a cross-platform application using native iOS and Android APIs directly. Certain applications are much more suited for this development approach, so understanding the difference between a Xamarin.Forms application and a classic Xamarin application is important when choosing what framework is best suited for your app. Xamarin.Forms is an abstraction over the native iOS and Android APIs that you can call directly from C#. So, Xamarin.Forms is using the same APIs you would in a classic Xamarin application, while providing a framework that allows you to define your UIs in a cross-platform way. An abstraction layer such as this is in many ways a very good thing, because it gives you the benefit of sharing the code driving your UI as well as any backend C# code that could also have been shared in a standard Xamarin app. The main disadvantage, however, is a slight hit in performance that might make it more difficult to create a perfect, buttery-smooth experience. Xamarin.Forms gives the option of writing renderers and effects that allow you to override your UI in a platform-specific way. This gives you the ability to drop down to native controls where needed. Have a look at the differences between a Xamarin.Forms application and a traditional Xamarin app in the following diagram: In both applications, the business logic and backend code of the application can be shared, but Xamarin.Forms gives an enormous benefit by allowing your UI code to be shared as well. Additionally, Xamarin.Forms applications have two projecttemplates to choose from, so let's cover each option: Xamarin.Forms Shared: Creates a shared project with all of your Xamarin.Forms code, an iOS project, and an Android project Xamarin.Forms Portable: Creates a Portable Class Library (PCL) containing all shared Xamarin.Forms code, an iOS project, and an Android project Both options will work well for any application, in general. Shared projects are basically a collection of code files that get added automatically by another project referencing it. Using a shared project allows you to use preprocessor statements to implement platform-specific code. PCL projects, on the other hand, create a portable .NET assembly that can be used on iOS, Android, and various other platforms. PCLs can't use preprocessor statements, so you generally set up platform-specific code with interface or abstract/base classes. In most cases, I think a PCL is a better option, since it inherently encourages better programming practices. See Chapter 3, Code Sharing between iOS and Android, for details on the advantages and disadvantages of these two code-sharing techniques. Using XAML in Xamarin.Forms In addition to defining Xamarin.Forms controls from C# code, Xamarin has provided the tooling for developing your UI in Extensible Application Markup Language (XAML). XAML is a declarative language that is basically a set of XML elements that map to a certain control in the Xamarin.Forms framework. Using XAML is comparable to using HTML to define the UI on a webpage, with the exception that XAML in Xamarin.Forms is creating C# objects that represent a native UI. To understand how XAML works in Xamarin.Forms, let's create a new page with different types of Xamarin.Forms controls on it. 
Return to your HelloForms project from earlier, and open the HelloFormsPage.xaml file. Add the following XAML code between the <ContentPage> tags: <StackLayout Orientation="Vertical" Padding="10,20,10,10"> <Label Text="My Label" XAlign="Center" /> <Button Text="My Button" /> <Entry Text="My Entry" /> <Image Source="https://www.xamarin.com/content/images/ pages/branding/assets/xamagon.png" /> <Switch IsToggled="true" /> <Stepper Value="10" /> </StackLayout> Go ahead and run the application on iOS; your application will look something like the following screenshot: On Android, the application looks identical to iOS, except it is using native Android controls instead of the iOS counterparts: In our XAML, we created a StackLayout control, which is a container for other controls. It can lay out controls either vertically or horizontally one by one, as defined by the Orientation value. We also applied a padding of 10 around the sides and bottom, and 20 from the top to adjust for the iOS status bar. You may be familiar with this syntax for defining rectangles if you are familiar with WPF or Silverlight. Xamarin.Forms uses the same syntax of left, top, right, and bottom values, delimited by commas. We also usedseveral of the built-in Xamarin.Forms controls to see how they work: Label: We used this earlier in the chapter. Used only for displaying text, this maps to a UILabel on iOS and a TextView on Android. Button: A general purpose button that can be tapped by a user. This control maps to a UIButton on iOS and a Button on Android. Entry: This control is a single-line text entry. It maps to a UITextField on iOS and an EditText on Android. Image: This is a simple control for displaying an image on the screen, which maps to a UIImage on iOS and an ImageView on Android. We used the Source property of this control, which loads an image from a web address. Using URLs on this property is nice, but it is best for performance to include the image in your project where possible. Switch: This is an on/off switch or toggle button. It maps to a UISwitch on iOS and a Switch on Android. Stepper: This is a general-purpose input for entering numbers using two plus and minus buttons. On iOS, this maps to a UIStepper, while on Android, Xamarin.Forms implements this functionality with two buttons. These are just some of the controls provided by Xamarin.Forms. There are also more complicated controls, such as the ListView and TableView, which you would expect for delivering mobile UIs. Even though we used XAML in this example, you could also implement this Xamarin.Forms page from C#. Here is an example of what that would look like: public class UIDemoPageFromCode : ContentPage { public UIDemoPageFromCode() { var layout = new StackLayout { Orientation = StackOrientation.Vertical, Padding = new Thickness(10, 20, 10, 10), }; layout.Children.Add(new Label { Text = "My Label", XAlign = TextAlignment.Center, }); layout.Children.Add(new Button { Text = "My Button", }); layout.Children.Add(new Image { Source = "https://www.xamarin.com/content/images/pages/ branding/assets/xamagon.png", }); layout.Children.Add(new Switch { IsToggled = true, }); layout.Children.Add(new Stepper { Value = 10, }); Content = layout; } } So, you can see where using XAML can be a bit more readable, and is generally a bit better at declaring UIs than C#. However, using C# to define your UIs is still a viable, straightforward approach. 
Using data-binding and MVVM

At this point, you should be grasping the basics of Xamarin.Forms, but you may be wondering how the MVVM design pattern fits into the picture. The MVVM design pattern was originally conceived for use alongside XAML and the powerful data binding features XAML provides, so it is only natural that it is a perfect design pattern to use with Xamarin.Forms. Let's cover the basics of how data-binding and MVVM are set up with Xamarin.Forms:

- Your Model and ViewModel layers will remain mostly unchanged from the MVVM pattern we covered earlier in the book.
- Your ViewModels should implement the INotifyPropertyChanged interface, which facilitates data binding. To simplify things in Xamarin.Forms, you can use the BindableObject base class and call OnPropertyChanged when values change on your ViewModels.
- Any Page or control in Xamarin.Forms has a BindingContext, which is the object it is data-bound to. In general, you can set a corresponding ViewModel to each view's BindingContext property.
- In XAML, you can set up a data-binding by using syntax of the form Text="{Binding Name}". This example would bind the Text property of the control to a Name property of the object residing in the BindingContext.
- In conjunction with data binding, events can be translated to commands using the ICommand interface. So, for example, the click event of a Button can be data-bound to a command exposed by a ViewModel. There is a built-in Command class in Xamarin.Forms to support this.
- Data binding can also be set up in C# code using the Binding class. However, it is generally much easier to set up bindings with XAML, since the syntax has been simplified with XAML markup extensions.

Now that we have covered the basics, let's go step by step and partially convert our XamSnap sample application from earlier in the book to use Xamarin.Forms. For the most part, we can reuse most of the Model and ViewModel layers, although we will have to make a few minor changes to support data-binding with XAML. Let's begin by creating a new Xamarin.Forms application backed by a PCL, named XamSnap:

1. First, create three folders in the XamSnap project named Views, ViewModels, and Models.
2. Add the appropriate ViewModels and Models classes from the XamSnap application from earlier chapters; these are found in the XamSnap project.
3. Build the project, just to make sure everything is saved. You will get a few compiler errors, which we will resolve shortly.

The first class we will need to edit is the BaseViewModel class; open it and make the following changes:

public class BaseViewModel : BindableObject
{
    protected readonly IWebService service = DependencyService.Get<IWebService>();
    protected readonly ISettings settings = DependencyService.Get<ISettings>();

    bool isBusy = false;

    public bool IsBusy
    {
        get { return isBusy; }
        set
        {
            isBusy = value;
            OnPropertyChanged();
        }
    }
}

First of all, we removed the calls to the ServiceContainer class, because Xamarin.Forms provides its own IoC container called the DependencyService. It functions very similarly to the container we built in the previous chapters, except it only has one method, Get<T>, and registrations are set up via an assembly attribute that we will set up shortly. Additionally, we removed the IsBusyChanged event in favor of the INotifyPropertyChanged interface that supports data binding. Inheriting from BindableObject gave us the helper method, OnPropertyChanged, which we use to inform bindings in Xamarin.Forms that the value has changed.
Notice we didn't pass a string containing the property name to OnPropertyChanged. This method is using a lesser-known feature of .NET 4.0 called CallerMemberName, which will automatically fill in the calling property's name at runtime. Next, let's set up the services we need with the DependencyService. Open App.xaml.cs in the root of the PCL project and add the following two lines above the namespace declaration: [assembly: Dependency(typeof(XamSnap.FakeWebService))] [assembly: Dependency(typeof(XamSnap.FakeSettings))] The DependencyService will automatically pick up these attributes and inspect the types we declared. Any interfaces these types implement will be returned for any future callers of DependencyService.Get<T>. I normally put all Dependency declarations in the App.cs file, just so they are easy to manage and in one place. Next, let's modify LoginViewModel by adding a new property: public Command LoginCommand { get; set; } We'll use this shortly for data-binding the command of a Button. One last change in the view model layer is to set up INotifyPropertyChanged for MessageViewModel: Conversation[] conversations; public Conversation[] Conversations { get { return conversations; } set { conversations = value; OnPropertyChanged(); } } Likewise, you could repeat this pattern for the remaining public properties throughout the view model layer, but this is all we will need for this example. Next, let's create a new Forms ContentPage Xaml file named LoginPage in the Views folder. In the code-behind file, LoginPage.xaml.cs, we'll just need to make a few changes: public partial class LoginPage : ContentPage { readonly LoginViewModel loginViewModel = new LoginViewModel(); public LoginPage() { Title = "XamSnap"; BindingContext = loginViewModel; loginViewModel.LoginCommand = new Command(async () => { try { await loginViewModel.Login(); await Navigation.PushAsync(new ConversationsPage()); } catch (Exception exc) { await DisplayAlert("Oops!", exc.Message, "Ok"); } }); InitializeComponent(); } } We did a few important things here, including setting the BindingContext to our LoginViewModel. We set up the LoginCommand, which basically invokes the Login method and displays a message if something goes wrong. It also navigates to a new page if successful. We also set the Title, which will show up in the top navigation bar of the application. Next, open LoginPage.xaml and we'll add the following XAML code inside ContentPage: <StackLayout Orientation="Vertical" Padding="10,10,10,10"> <Entry Placeholder="Username" Text="{Binding UserName}" /> <Entry Placeholder="Password" Text="{Binding Password}" IsPassword="true" /> <Button Text="Login" Command="{Binding LoginCommand}" /> <ActivityIndicator IsVisible="{Binding IsBusy}" IsRunning="true" /> </StackLayout> This will set up the basics of two text fields, a button, and a spinner, complete with all the bindings to make everything work. Since we set up BindingContext from the LoginPage code-behind file, all the properties are bound to LoginViewModel. 
Next, create ConversationsPage as a XAML page just like before, and edit the ConversationsPage.xaml.cs code-behind file:

public partial class ConversationsPage : ContentPage
{
    readonly MessageViewModel messageViewModel = new MessageViewModel();

    public ConversationsPage()
    {
        Title = "Conversations";
        BindingContext = messageViewModel;
        InitializeComponent();
    }

    protected async override void OnAppearing()
    {
        try
        {
            await messageViewModel.GetConversations();
        }
        catch (Exception exc)
        {
            await DisplayAlert("Oops!", exc.Message, "Ok");
        }
    }
}

In this case, we repeated a lot of the same steps. The exception is that we used the OnAppearing method as a way to load the conversations to display on the screen. Now let's add the following XAML code to ConversationsPage.xaml:

<ListView ItemsSource="{Binding Conversations}">
    <ListView.ItemTemplate>
        <DataTemplate>
            <TextCell Text="{Binding UserName}" />
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>

In this example, we used a ListView to data-bind a list of items and display them on the screen. We defined a DataTemplate, which represents a set of cells for each item in the list that the ItemsSource is data-bound to. In our case, a TextCell displaying the UserName is created for each item in the Conversations list. Last but not least, we must return to the App.xaml.cs file and modify the startup page:

MainPage = new NavigationPage(new LoginPage());

We used a NavigationPage here so that Xamarin.Forms can push and pop between different pages. This uses a UINavigationController on iOS, so you can see how the native APIs are being used on each platform. At this point, if you compile and run the application, you will get a functional iOS and Android application that can log in and view a list of conversations.

Summary

In this chapter, we covered the basics of Xamarin.Forms and how it can be very useful for building your own cross-platform applications. Xamarin.Forms shines for certain types of apps, but can be limiting if you need to write more complicated UIs or take advantage of native drawing APIs. We discovered how to use XAML for declaring our Xamarin.Forms UIs and understood how Xamarin.Forms controls are rendered on each platform. We also dived into the concepts of data-binding and how to use the MVVM design pattern with Xamarin.Forms. Last but not least, we began porting the XamSnap application from earlier in the book to Xamarin.Forms, and were able to reuse a lot of our existing code. In the next chapter, we will cover the process of submitting applications to the iOS App Store and Google Play. Getting your app into the store can be a time-consuming process, but guidance from the next chapter will give you a head start.

Introduction to Network Security

Packt
06 Apr 2017
18 min read
In this article by Warun Levesque, Michael McLafferty, and Arthur Salmon, the authors of the book Applied Network Security, we will be covering the following topics, which will give an introduction to network security:

- Murphy's law
- The definition of a hacker and the types
- The hacking process
- Recent events and statistics of network attacks
- Security for individual versus company
- Mitigation against threats
- Building an assessment

This world is changing rapidly with advancing network technologies. Unfortunately, sometimes the convenience of technology can outpace its security and safety. Technologies like the Internet of Things are ushering in a new era of network communication. We also want to change the mindset of those working in the field of network security. Most current cyber security professionals practice defensive and passive security; they mostly focus on mitigation and forensic tactics to analyze the aftermath of an attack. We want to change this mindset to one of offensive security. This article will give insight into how a hacker thinks and what methods they use. Having knowledge of a hacker's tactics will give the reader a great advantage in protecting any network from attack. (For more resources related to this topic, see here.)

Murphy's law

Network security is much like Murphy's law in the sense that if something can go wrong, it will go wrong. To be successful at understanding and applying network security, a person must master the three Ps: persistence, patience, and passion. A cyber security professional must be persistent in their pursuit of a solution to a problem. Giving up is not an option. The answer will be there; it just may take more time than expected to find it. Having patience is also an important trait to master. When dealing with network anomalies, it is very easy to get frustrated. Taking a deep breath and keeping a cool head goes a long way in finding the correct solution to your network security problems. Finally, developing a passion for cyber security is critical to being a successful network security professional. Having that passion will drive you to learn more and evolve yourself on a daily basis to be better. Once you learn, you will improve and perhaps go on to inspire others to reach similar aspirations in cyber security.

The definition of a hacker and the types

A hacker is a person who uses computers to gain unauthorized access to data. There are many different types of hackers: white hat, grey hat, and black hat. Some hackers are defined by their intention; for example, a hacker that attacks for political reasons may be known as a hacktivist. A white hat hacker has no criminal intent, but instead focuses on finding and fixing network vulnerabilities. Often companies will hire a white hat hacker to test the security of their network for vulnerabilities. A grey hat hacker is someone who may have criminal intent, but not often for personal gain. Often a grey hat will seek to expose a network vulnerability without permission from the owner of the network. A black hat hacker is purely criminal; their sole objective is personal gain. Black hat hackers take advantage of network vulnerabilities any way they can for maximum benefit. A cyber-criminal is another type of black hat hacker, who is motivated to attack for illegal financial gain. A more basic type of hacker is known as a script kiddie: a person who knows how to use basic hacking tools but doesn't understand how they work.
They often lack the knowledge to launch any kind of real attack, but can still cause problems on a poorly protected network. Hackers tools There are a range of many different hacking tools. A tool like Nmap for example, is a great tool for both reconnaissance and scanning for network vulnerabilities. Some tools are grouped together to make toolkits and frameworks, such as the Social Engineering Toolkit and Metasploit framework. The Metasploit framework is one of the most versatile and supported hacking tool frameworks available. Metasploit is built around a collection of highly effective modules, such as MSFvenom and provides access to an extensive database of exploits and vulnerabilities. There are also physical hacking tools. Devices like the Rubber Ducky and Wi-Fi Pineapple are good examples. The Rubber Ducky is a usb payload injector, that automatically injects a malicious virus into the device it's plugged into. The Wi-Fi Pineapple can act as a rogue router and be used to launch man in the middle attacks. The Wi-Fi Pineapple also has a range of modules that allow it to execute multiple attack vectors. These types of tools are known as penetration testing equipment. The hacking process There are five main phases to the hacking process: Reconnaissance: The reconnaissance phase is often the most time consuming. This phase can last days, weeks, or even months sometimes depending on the target. The objective during the reconnaissance phase is to learn as much as possible about the potential target. Scanning: In this phase the hacker will scan for vulnerabilities in the network to exploit. These scans will look for weaknesses such as, open ports, open services, outdated applications (including operating systems), and the type of equipment being used on the network. Access: In this phase the hacker will use the knowledge gained in the previous phases to gain access to sensitive data or use the network to attack other targets. The objective of this phase is to have the attacker gain some level of control over other devices on the network. Maintaining access: During this phase a hacker will look at various options, such as creating a backdoor to maintain access to devices they have compromised. By creating a backdoor, a hacker can maintain a persistent attack on a network, without fear of losing access to the devices they have gained control over. Although when a backdoor is created, it increases the chance of a hacker being discovered. Backdoors are noisy and often leave a large footprint for IDS to follow. Covering your tracks: This phase is about hiding the intrusion of the network by the hacker as to not alert any IDS that may be monitoring the network. The objective of this phase is to erase any trace that an attack occurred on the network. Recent events and statistics of network attacks The news has been full of cyber-attacks in recent years. The number and scale of attacks are increasing at an alarming rate. It is important for anyone in network security to study these attacks. Staying current with this kind of information will help in defending your network from similar attacks. Since 2015, the medical and insurance industry have been heavily targeted for cyber-attacks. On May 5th, 2015 Premera Blue Cross was attacked. This attack is said to have compromised at least 11 million customer accounts containing personal data. The attack exposed customer's names, birth dates, social security numbers, phone numbers, bank account information, mailing, and e-mail addresses. 
Another attack that was on a larger scale was the attack on Anthem. It is estimated that 80 million personal data records were stolen from customers, employees, and even the Chief Executive Officer of Anthem. Another, more infamous, recent cyber-attack was the Sony hack. This hack was a little different than the Anthem and Blue Cross attacks, because it was carried out by hacktivists instead of cyber-criminals. Even though both types of hacking are criminal, the fundamental reasoning and objectives of the attacks are quite different. The objective of the Sony attack was to disrupt and embarrass the executives at Sony as well as prevent a film from being released. No financial data was targeted. Instead, the hackers went after the personal e-mails of top executives and then released them to the public, causing humiliation to Sony and its executives. Many apologies were issued by Sony in the weeks following the attack. Large commercial retailers have also been a favorite target for hackers. An attack occurred against Home Depot in September 2014. That attack was on a large scale: it is estimated that over 56 million credit cards were compromised during the Home Depot attack. A similar attack, but on a smaller scale, was carried out against Staples in October 2014. During this attack, over 1.4 million credit card numbers were stolen. The statistics on cyber security attacks are eye opening. It is estimated by some experts that cybercrime has a worldwide cost of 110 billion dollars a year. In a given year, over 15 million Americans will have their identity stolen through cyber-attacks; it is also estimated that 1.5 million people fall victim to cybercrime every day. These statistics are rapidly increasing and will continue to do so until more people take an active interest in network security.

Our defense

The baseline for preventing potential security issues typically begins with hardening the security infrastructure, including firewalls, the DMZ, and physical security platforms. It means entrusting only valid sources or individuals with personal data and/or access to that data, being compliant with all regulations that apply to a given situation or business, being aware of the types of breaches as well as your potential vulnerabilities, and understanding whether an individual or an organization is a higher-risk target for attacks. The question has to be asked: does one's organization promote security, both at the personal and the business level, to deter cyber-attacks? After a decade of responding to incidents and helping customers recover from and increase their resilience against breaches, an organization may already have a security training and awareness (STA) program, or other training programs may have existed. As the security and threat landscape evolves, organizations and individuals need to continually evaluate the practices that are required and appropriate for the data they collect, transmit, retain, and destroy. Key measures include:

- Encryption of data at rest/in storage and in transit is a fundamental security requirement, and its failure is frequently cited as the cause for regulatory action and lawsuits.
- Enforce effective password management policies. Least privilege user access (LUA) is a core security strategy component, and all accounts should run with as few privileges and access levels as possible.
- Conduct regular security design and code reviews, including penetration tests and vulnerability scans, to identify and mitigate vulnerabilities.
- Require e-mail authentication on all inbound and outbound mail servers to help detect malicious e-mail, including spear phishing and spoofed e-mail.
- Continuously monitor the security of your organization's infrastructure in real time, including collecting and analyzing all network traffic, analyzing centralized logs (including firewall, IDS/IPS, VPN, and AV) using log management tools, and reviewing network statistics.
- Identify anomalous activity, investigate, and revise your view of anomalous activity accordingly.

User training would be the biggest challenge, but it is arguably the most important defense.

Security for individual versus company

One of the fundamental questions individuals need to ask themselves is: is there a difference between them as an individual and an organization? Individuals are less likely to be targeted due to their smaller attack surface area. However, there are tools and sites on the internet that can be utilized to detect and mitigate data breaches for both. https://haveibeenpwned.com/ or http://map.norsecorp.com/ are good sites to start with. The issue is that individuals believe they are not a target because there is little to gain from attacking individuals, but in truth everyone has the ability to become a target.

Wi-Fi vulnerabilities

Protecting wireless networks can be very challenging at times. There are many vulnerabilities that a hacker can exploit to compromise a wireless network. One of the basic Wi-Fi vulnerabilities is broadcasting the Service Set Identifier (SSID) of your wireless network. Broadcasting the SSID makes the wireless network easier to find and target. Another vulnerability in Wi-Fi networks is using Media Access Control (MAC) addresses for network authentication. A hacker can easily spoof or mimic a trusted MAC address to gain access to the network. Using weak encryption such as Wired Equivalent Privacy (WEP) will make your network an easy target for attack; there are many hacking tools available to crack any WEP key in under five minutes. A major physical vulnerability in wireless networks is the Access Point (AP). Sometimes APs will be placed in poor locations that can be easily accessed by a hacker. A hacker may install what is called a rogue AP. This rogue AP will monitor the network for data that a hacker can use to escalate their attack. Often this tactic is used to harvest the credentials of high-ranking management personnel, to gain access to encrypted databases that contain the personal/financial data of employees, customers, or both. Peer-to-peer technology can also be a vulnerability for wireless networks. A hacker may gain access to a wireless network by using a legitimate user as an accepted entry point. Not using and enforcing security policies is also a major vulnerability found in wireless networks. Using security tools like Active Directory (deployed properly) will make it harder for a hacker to gain access to a network. Hackers will often go after low-hanging fruit (easy targets), so having at least some deterrence will go a long way in protecting your wireless network. Using Intrusion Detection Systems (IDS) in combination with Active Directory will immensely increase the defense of any wireless network. The most effective factor, though, is having a well-trained and informed cyber security professional watching over the network. The more a cyber security professional (threat hunter) understands the tactics of a hacker, the more effective that threat hunter will become in discovering and neutralizing a network attack.
Although there are many challenges in protecting a wireless network, with the proper planning and deployment those challenges can be overcome.

Knowns and unknowns

The toughest thing about unknown security risks is that they are unknown; unless they are found, they can stay hidden. A common practice to determine an unknown risk would be to identify all the known risks and attempt to mitigate them as best as possible. There are many sites available that can assist in this venture. The most helpful would be reports from CVE sites that identify vulnerabilities.

False positives

            Positive                      Negative
True        TP: correctly identified      TN: correctly rejected
False       FP: incorrectly identified    FN: incorrectly rejected

As it relates to detection, there are four situations that can exist for an analyzed event, corresponding to the relation between the detection result and the true nature of the event. Each of the situations in the preceding table is outlined as follows:

True positive (TP): The analyzed event is correctly classified as an intrusion or as harmful/malicious. For example, an attacker uses a port like 4444 to communicate with a victim's device. An intrusion detection system detects network traffic on this port and alerts the cyber security team to the potentially malicious activity. The cyber security team quickly closes the port and isolates the infected device from the network.

True negative (TN): The analyzed event is correctly classified as benign and correctly rejected as a threat. For example, a network security administrator enters their credentials into the Active Directory server and is granted administrator access.

False positive (FP): The analyzed event is innocuous or otherwise clean from a security perspective; however, the system classifies it as malicious or harmful. For example, a user types their password into a website's login text field. Instead of being granted access, the user is flagged for an SQL injection attempt by input sanitization. This is often caused when input sanitization is misconfigured.

False negative (FN): The analyzed event is malicious, but it is classified as normal/innocuous. For example, an attacker inputs an SQL injection string into a text field found on a website to gain unauthorized access to database information. The website accepts the SQL injection as normal user behavior and grants access to the attacker.

As it relates to detection, having systems correctly identify these situations is paramount.

Mitigation against threats

There are many threats that a network faces, and new network threats are emerging all the time. As a network security professional, it would be wise to have a good understanding of effective mitigation techniques. For example, a hacker using a packet sniffer can be mitigated by only allowing the network admin to run a network analyzer (packet sniffer) on the network. A packet sniffer can usually detect another packet sniffer on the network right away, although there are ways a knowledgeable hacker can disguise the packet sniffer as another piece of software. A hacker will not usually go to such lengths unless it is a highly-secured target. It is alarming that most businesses do not properly monitor their network, or do not monitor it at all. It is important for any business to have a business continuity/disaster recovery plan. This plan is intended to allow a business to continue to operate and recover from a serious network attack.
The most common deployment of the continuity/disaster recovery plan is after a DDoS attack. A DDoS attack could potentially cost a business or organization millions of dollars in lost revenue and productivity. One of the most effective and hardest-to-mitigate attacks is social engineering. All the most devastating network attacks have begun with some type of social engineering attack. One good example is the hack against Snapchat on February 26th, 2016. "Last Friday, Snapchat's payroll department was targeted by an isolated e-mail phishing scam in which a scammer impersonated our Chief Executive Officer and asked for employee payroll information," Snapchat explained in a blog post. Unfortunately, the phishing e-mail wasn't recognized for what it was (a scam), and payroll information about some current and former employees was disclosed externally. Socially engineered phishing e-mails, like the one that affected Snapchat, are common attack vectors for hackers. The one difference between phishing e-mails from a few years ago and the ones in 2016 is the level of social engineering hackers are putting into the e-mails. The Snapchat HR phishing e-mail indicated a high level of reconnaissance on the Chief Executive Officer of Snapchat. This reconnaissance most likely took months. This level of detail and targeting of an individual (the Chief Executive Officer) is more accurately known as a spear-phishing e-mail. Spear-phishing campaigns go after one individual (one fish), compared to phishing campaigns that are more general and may be sent to millions of users. The latter is like casting a big open net into the water and seeing what comes back. The only real way to mitigate social engineering attacks is training and building awareness among users. Properly training the users that access the network will create a higher level of awareness against socially engineered attacks.

Building an assessment

Creating a network assessment is an important aspect of network security. A network assessment will allow for a better understanding of where vulnerabilities may be found within the network. It is important to know precisely what you are doing during a network assessment. If the assessment is done wrong, you could cause great harm to the network you are trying to protect. Before you start the network assessment, you should determine the objectives of the assessment itself. Are you trying to identify whether the network has any open ports that shouldn't be open? Is your objective to quantify how much traffic flows through the network at any given time, or at a specific time? Once you decide on the objectives of the network assessment, you will then be able to choose the types of tools you will use. Network assessment tools are often known as penetration testing tools. A person who employs these tools is known as a penetration tester or pen tester. These tools are designed to find and exploit network vulnerabilities, so that they can be fixed before a real attack occurs. That is why it is important to know what you are doing when using penetration testing tools during an assessment. Sometimes network assessments require a team. It is important to have an accurate idea of the scale of the network before you pick your team. In a large enterprise network, it can be easy to become overwhelmed by the tasks to complete without enough support. Once the scale of the network assessment is determined, the next step would be to ensure you have written permission and a scope from management.
All parties involved in the network assessment must be clear on what can and cannot be done to the network during the assessment. After the assessment is completed, the last step is creating a report to educate the concerned parties about the findings. Providing detailed information about the vulnerabilities found, along with proposed solutions, helps keep the network's defenses up to date. The report can also indicate whether any viruses are lying dormant, waiting for an opportune time to attack the network. Network assessments should be conducted routinely and frequently to help ensure strong network security.

Summary

In this article, we covered the fundamentals of network security. It began by explaining why network security is important and what should be done to secure a network, and it covered the different ways physical security can be applied. The importance of having security policies in place was discussed, along with wireless security and why wireless security policies matter.

Resources for Article:

Further resources on this subject:

API and Intent-Driven Networking [article]
Deploying First Server [article]
Point-to-Point Networks [article]
18 people in tech every programmer and software engineer needs to follow in 2019

Richard Gall
02 Jan 2019
9 min read
After a tumultuous 2018 in tech, it's vital that you surround yourself with a variety of opinions and experiences in 2019 if you're to understand what the hell is going on. While there are thousands of incredible people working in tech, I've decided to make life a little easier for you by bringing together 18 of the best people from across the industry to follow on Twitter. From engineers at Microsoft and AWS, to researchers and journalists, this list is by no means comprehensive, but it does give you a wide range of people that have been influential, interesting, and important in 2018.

(A few of) the best people in tech on Twitter

April Wensel (@aprilwensel)

April Wensel is the founder of Compassionate Coding, an organization that aims to bring emotional intelligence and ethics into the tech industry. In April 2018 Wensel wrote an essay arguing that "it's time to retire RTFM" (read the fucking manual). The essay was well received by many in the tech community tired of a culture of ostensibly caustic machismo, and it played a part in making conversations around community accessibility an important part of 2018.

Watch her keynote at NodeJS Interactive: https://www.youtube.com/watch?v=HPFuHS6aPhw

Liz Fong-Jones (@lizthegrey)

Liz Fong-Jones is an SRE and Dev Advocate at Google Cloud Platform, but over the last couple of years she has become an important figure within tech activism. First helping to create the NeverAgain pledge in response to the election of Donald Trump in 2016, then helping to bring to light Google's fraught internal struggle over diversity, Fong-Jones has effectively laid the foundations for the mainstream appearance of tech activism in 2018. In an interview with Fast Company, Fong-Jones says she has accepted her role as a spokesperson for the movement that has emerged, but she's committed to "equipping other employees to fight for change in their workplaces–whether at Google or not –so that I’m not a single point of failure."

Ana Medina (@Ana_M_Medina)

Ana Medina is a chaos engineer at Gremlin. Since moving to the chaos engineering platform from Uber (where she was part of the class action lawsuit against the company), Medina has played an important part in explaining what chaos engineering looks like in practice all around the world. She is also an important voice in discussions around diversity and mental health in the tech industry - if you get a chance to hear her talk, make sure you take it, and if you don't, you've still got Twitter...

Sarah Drasner (@sarah_edo)

Sarah Drasner does everything. She's a Developer Advocate at Microsoft, part of the VueJS core development team, the organizer behind Concatenate (a free conference for Nigerian developers), and an author as well.

https://twitter.com/sarah_edo/status/1079400115196944384

Although Drasner specializes in front end development and JavaScript, she's a great person to follow on Twitter for her broad insights on how we learn and evolve as software developers. Do yourself a favour and follow her.

Mark Imbriaco (@markimbriaco)

Mark Imbriaco is the technical director at Epic Games. Given the company's truly, er, epic year thanks to Fortnite, Imbriaco can offer an insight into how one of the most important and influential technology companies on the planet is thinking.

Corey Quinn (@QuinnyPig)

Corey Quinn is an AWS expert.
As the brain behind the Last Week in AWS newsletter and the voice behind the Screaming in the Cloud podcast (possibly the best cloud computing podcast on the planet), he is without a doubt the go-to person if you want to know what really matters in cloud. The range of guests that Quinn gets on the podcast is really impressive, and sums up his online persona: open, engaged, and always interesting.

Yasmine Evjen (@YasmineEvjen)

Yasmine Evjen is a Design Advocate at Google. That means she is not only one of the minds behind Material Design, but also someone helping to demonstrate the importance of human-centered design around the world, as the presenter of Centered, a web series by the Google Design team about the ways human-centered design is used for a range of applications. If you haven't seen it, it's well worth a watch.

https://www.youtube.com/watch?v=cPBXjtpGuSA&list=PLJ21zHI2TNh-pgTlTpaW9kbnqAAVJgB0R&index=5&t=0s

Suz Hinton (@noopkat)

Suz Hinton works on IoT programs at Microsoft. That's interesting in itself, but when she's not developing fully connected smart homes (possibly), Hinton also streams code tutorials on Twitch (also as noopkat).

Chris Short (@ChrisShort)

If you want to get the lowdown on all things DevOps, you could do a lot worse than Chris Short. He boasts outstanding credentials - he's a CNCF ambassador and has experience with Red Hat and Ansible - but more important is the quality of his insights. A great place to begin is with DevOpsish, a newsletter Short produces, which features some really valuable discussions on the biggest issues and talking points in the field.

Dan Abramov (@dan_abramov)

Dan Abramov is one of the key figures behind ReactJS. Along with @sophiebits, @jordwalke, and @sebmarkbage, Abramov is quite literally helping to define front end development as we know it. If you're a JavaScript developer, or simply have any kind of passing interest in how we'll be building front ends over the next decade, he is an essential voice to have on your timeline. As you'd expect from someone who has helped put together one of the most popular JavaScript libraries in the world, Dan is very good at articulating some of the biggest challenges we face as developers, and he can provide useful insights on how to approach problems you might face, whether day to day or career changing.

Emma Wedekind (@EmmaWedekind)

As well as working at GoTo Meeting, Emma Wedekind is the founder of Coding Coach, a platform that connects developers to mentors to help them develop new skills. This experience makes Wedekind an important authority on developer learning. At a time when deciding what to learn and how to do it can feel like such a challenging and complex process, surrounding yourself with people taking those issues seriously can be immensely valuable.

Jason Lengstorf (@jlengstorf)

Jason Lengstorf is a Developer Advocate at GatsbyJS (a cool project that makes it easier to build projects with React). His writing - on Twitter and elsewhere - is incredibly good at helping you discover new ways of working and approaching problems.

Bridget Kromhout (@bridgetkromhout)

Bridget Kromhout is another essential voice in cloud and DevOps. Currently working at Microsoft as Principal Cloud Advocate, Bridget also organizes DevOps Days and presents the Arrested DevOps podcast with Matty Stratton and Trevor Hess. Follow Bridget for her perspective on DevOps, as well as her experience in DevRel.
Ryan Burgess (@burgessdryan)

Netflix hasn't faced the scrutiny of many of its fellow tech giants this year, which means it's easy to forget the extent to which the company is at the cutting edge of technological innovation. This is why it's well worth following Ryan Burgess - as an engineering manager he's well placed to provide an insight into how the company is evolving from a tech perspective. His talk at Real World React on A/B testing user experiences is well worth watching:

https://youtu.be/TmhJN6rdm28

Anil Dash (@anildash)

Okay, so chances are you already follow Anil Dash - he does have half a million followers, after all - but if you don't, you most definitely should. Dash is a key figure in new media and digital culture, but he's not just another thought leader: he's someone who actually understands what it takes to build this stuff. As CEO of Glitch, a platform for building (and 'remixing') cool apps, he's having an impact on the way developers work and collaborate. Six years ago, Dash wrote an essay called 'The Web We Lost'. In it, he laments how the web was becoming colonized by a handful of companies who built the key platforms on which we communicate and engage with one another online. Today, after a year of protest and controversy, Dash's argument is as salient as ever - it's one of the reasons it's vital that we listen to him.

Jessie Frazelle (@jessfraz)

Jessie Frazelle is a bit of a superstar, which shouldn't really be that surprising - she seems to have a natural ability to pull things apart, put them back together again, and have the most fun imaginable while doing it. Formerly part of the core Docker team, Frazelle now works at GitHub, where her knowledge and expertise are helping to develop the next Microsoft-tinged chapter in GitHub's history. I was lucky enough to see Jessie speak at ChaosConf in September - check out her talk:

https://youtu.be/1hhVS4pdrrk

Rachel Coldicutt (@rachelcoldicutt)

Rachel Coldicutt is the CEO of Doteveryone, a think tank based in the U.K. that champions responsible tech. If you're interested in how technology interacts with other aspects of society and culture, as well as how it is impacting, and being impacted by, policymakers, Coldicutt is a vital person to follow.

Kelsey Hightower (@kelseyhightower)

Kelsey Hightower is another superstar in the tech world - when he talks, you need to listen. Hightower currently works at Google Cloud, but he spends a lot of time at conferences evangelizing for more effective cloud native development.

https://twitter.com/mattrickard/status/1073285888191258624

If you're interested in anything infrastructure or cloud related, you need to follow Kelsey Hightower.

Who did I miss?

That's just a list of a few people in tech I think you should follow in 2019 - but who did I miss? Which accounts are essential? What podcasts and newsletters should we subscribe to?