
How-To Tutorials

7019 Articles

Why geospatial analysis and GIS matters more than ever today

Richard Gall
18 Nov 2019
7 min read
Due to the hype around big data and artificial intelligence, it can be easy to miss some of the powerful but specific ways data can be truly impactful. One of the most important areas of modern data analysis that rarely gets its due is geospatial analysis. At a time when both the natural and human worlds are going through a period of seismic change, the ability to throw a spotlight on issues of climate and population change is as transformative as the smartest chatbot (indeed, probably much more transformative).

The foundation of geospatial analysis is GIS. GIS, in case you're new to the field, is an acronym for Geographic Information System. GIS applications and tools allow you to store, manipulate, analyze, and visualize data that corresponds to different aspects of the existing environment. Central to this is topographical information, but it can also include many other aspects, from contours and slopes to the built environment, land types, and bodies of water. In the context of climate and human geography, it's easy to see how this kind of data can help us see the bigger picture - quite literally - behind what's happening in our region, across our countries, and indeed, across the whole world.

The history of geospatial analysis is a testament to its power. In 1854, the physician John Snow identified the source of a cholera outbreak in London by marking out the homes of victims on a map. The cluster of victims that Snow's map revealed led him to an infected water supply.

Read next: Neo4j introduces Aura, a new cloud service to supply a flexible, reliable and developer-friendly graph database

How GIS and geospatial analysis is being used today

While this example is, of course, incredibly low-tech, it highlights exactly why geospatial analysis and GIS tools can be so valuable. To bring us up to date, there are many more examples of how geospatial analysis is making a real impact on social and environmental issues. This article on Forbes, for example, details some of the ways in which GIS projects are helping to uncover information that offers unique insights into the history of racism, and its continuing reality today. The list includes a map of historical lynchings that occurred between 1877 and 1950, and a map by the Urban Institute that shows the reality of racial segregation in U.S. schools in the 21st century.

https://twitter.com/urbaninstitute/status/504668921962577921

That's just a small snapshot - there is a huge range of incredible GIS projects having a massive impact not only on how we understand issues, but also on policy. That's analytics enacting real, demonstrable change. Here are a few of the different areas in which GIS is being used:

How GIS can be used in agriculture

GIS can be used to tackle crop diseases by identifying issues across a large area of land. It's possible to gain a deeper insight into what can drive improvements to crop yields by looking at the geographic and environmental factors that influence successful growth.

How GIS can be used in retail

GIS can help provide an insight into the relationship between consumer behavior and factors such as weather and congestion. It can also be used to better understand how consumers interact with products in shops. This can influence things like store design and product placement.

How GIS can be used in meteorology and climate science

Without GIS, it would be impossible to properly understand and visualize rainfall around the world. GIS can also be used to make predictions about the weather. For example, identifying anomalies in patterns and trends could indicate extreme weather events.

How GIS can be used in medicine and health

As we saw in the example above, by identifying clusters of disease, it becomes much easier to determine the causes of certain illnesses. GIS can also help us better understand the relationship between illness and environment - like pollution and asthma.

How GIS can be used for humanitarian purposes

Geospatial tools can help humanitarian teams to understand patterns of violence in given areas. This can help them to better manage and distribute resources and support to where they're needed (Map Kibera is a great example of how this can be done). GIS tools are good at helping to bridge the gap between local populations and humanitarian workers in times of crisis. For example, during the Haiti earthquake, non-profit tech company Ushahidi's product helped to collate and coordinate reports from across the island. This made it possible to bring order to what might otherwise have been a mess of data and information.

There are many, many more examples of GIS being used for both commercial and non-profit purposes. If you want an in-depth look at a huge range of examples, it's well worth checking out this article, which features 1000 GIS projects.

Although geospatial analysis can be used across many different domains, all the examples above have a common thread running through them: they all help us to understand the impact of space and geography. From social mobility and academic opportunity to soil erosion, GIS and other geospatial tools are brilliant because they help us to identify relationships that we might otherwise be unable to see.

GIS and geospatial analysis project ideas

This is an important point if you're not sure where to start with a new GIS project. Forget the data (to begin with, at least) and just think about what sort of questions you'd like to answer. The list is potentially endless, but here are some questions off the top of my head (a small code sketch at the end of this article shows one way to start poking at questions like these):

Are certain parts of your region more prone to flooding?
Why are certain parts of your town congested and not others?
Do economically marginalised people have to travel further to receive healthcare?
Does one part of your region receive more rainfall/snowfall than other parts?
Are there more new buildings in one area than another?

Getting this right is integral to any good analysis project. Ultimately, it's what makes the whole thing worthwhile.

Read next: PostGIS 3.0.0 releases with raster support as a separate extension

Where to find data for a GIS project

Once you've decided on something you want to find out, the next step is to collect your data. This can be tricky, but there is nevertheless a massive range of free data sources you can use for your project. This web page has a comprehensive collection of datasets; while it might not have exactly what you're looking for, it's a good place to begin if you simply want to try something out.

Conclusion: Geospatial analysis is one of the most exciting and potentially transformative fields in analytics

GIS and geospatial analysis are quite literally rooted in the real world. In the maps and visualizations that we create, we're able to offer unique perspectives on history or provide practical guidance on how we should act and what we need to do. This is significant: all too often technology can feel like it's divorced from reality, as if it is folded into its own world that has no connection to real people.
So, be ambitious, and be bold with your next GIS project: who knows what impact it could have.
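To make the project-ideas list above a little more concrete, here is a minimal Python sketch of the kind of spatial question it describes - finding which recorded points fall within a given distance of a site. The haversine formula itself is standard; the site and incident coordinates below are invented sample data, and for real work you would reach for a proper GIS library and real datasets (like the ones linked above) rather than hand-rolled distance maths.

from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6,371 km is the mean Earth radius

site = (51.5074, -0.1278)  # hypothetical site of interest
incidents = [              # hypothetical flood reports: (name, lat, lon)
    ("flood_report_1", 51.5155, -0.0922),
    ("flood_report_2", 51.7520, -1.2577),
    ("flood_report_3", 51.4545, -0.9787),
]

# Keep only the reports within 10 km of the site
nearby = [name for name, lat, lon in incidents
          if haversine_km(site[0], site[1], lat, lon) <= 10.0]
print(nearby)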


Configuring FreeSWITCH for WebRTC

Packt
21 Jul 2015
12 min read
In this article, written by Giovanni Maruzzelli, author of FreeSWITCH 1.6 Cookbook, we learn how WebRTC is all about security and encryption. They are not an afterthought; they're intimately interwoven at the design level and are mandatory. For example, you cannot stream audio or video in the clear (without encryption) via WebRTC.

(For more resources related to this topic, see here.)

Getting ready

To start with this recipe, you need certificates. These are the same kind of certificates used by web servers for SSL-HTTPS. Yes, you can be your own Certification Authority and self-sign your own certificate. However, this will add considerable hassle; browsers will not recognize the certificate, and you will have to manually instruct them to make a security exception and accept it, or import your own Certification Authority chain into the browser. Also, in some mobile browsers, it is not possible to import self-signed Certification Authorities at all. The bottom line is that you can buy an SSL certificate for less than $10, and in 5 minutes. (No signatures, papers, faxes, telephone calls… nothing is required. Only a confirmation email and a few bucks are enough.) It will save you much frustration, and you'll be able to cleanly showcase your installation to others.

The same reasoning applies to DNS Fully Qualified Domain Names (FQDNs) - certificates belong to FQDNs. You can put your DNS names in /etc/hosts, or set up an internal DNS server, but this will not work for mobile clients and desktops outside your control. You can register a domain, point an FQDN to your machine's public IP (it can be a Linode, an AWS VM, or whatever), and buy a certificate using that FQDN as the Common Name (CN). Don't try to set up the WebRTC server on your internal LAN behind the same NAT that your clients are behind (again, it is possible but painful).

How to do it...

Once you have obtained your certificate (be sure to download the Certification Authority chain too, and keep your private key; you'll need it), you must concatenate those three elements to create the certificates needed for mod_sofia to serve SIP signaling via WSS and media via SRTP/DTLS. With the certificates in the right place, you can now activate SSL in Sofia. Open /usr/local/freeswitch/conf/vars.xml: as you can see, in the default configuration, both lines that feature SSL are false. Edit them both to change them to true (a sketch of what these lines look like appears at the end of this article).

How it works...

By default, Sofia will listen on port 7443 for WSS clients. You may want to change this port if you need your clients to traverse very restrictive firewalls. Edit /usr/local/freeswitch/conf/sip-profiles/internal.xml and change the "wss-binding" value to 443. This number, 443, is the HTTPS (SSL) port, and is almost universally open in all firewalls. Also, WSS traffic is indistinguishable from HTTPS/SSL traffic, so your signaling will pass through the most advanced Deep Packet Inspection. Remember that if you use port 443 for WSS, you cannot use that same port for HTTPS, so you will need to deploy your secure web server on another machine.

There's more...

Other parts of such a configuration that deserve attention are certificates, DNS, and STUN/TURN. Generally speaking, if you set up with real DNS names, you will not need to run your own STUN server; your clients can rely on Google's STUN servers. But if you need a TURN server because some of your clients need a media relay (because they're behind a demented NAT, or UDP is blocked by zealous firewalls), install rfc5766-turn-server on another machine and have it listen on TCP ports 443 and 80. You can also put certificates on it and use TURNS over an encrypted connection, with the same firewall-piercing properties as for signaling.

SIP signaling in JavaScript with SIP.js (WebRTC client)

Let's carry out the most basic interaction with a web browser: audio/video through WebRTC. We'll start using SIP.js, which uses a protocol very familiar to all those who are old hands at VoIP. A web page will display a click-to-call button, and anyone can click for inquiries. That call will be answered by our company's PBX and routed to our employee's extension (1010). Our employee will wait on a browser with the "answer" web page open, and will automatically be connected to the incoming call (if our employee does not answer, the call will go to their voicemail).

Getting ready

To use this example, download version 0.7.0 of the SIP.js JavaScript library from www.sipjs.com. We need an "anonymous" user that we can allow into our system without risk, that is, a user that can do only what we have preplanned. Create an anonymous user for click-to-call in a file named /usr/local/freeswitch/conf/directory/default/anonymous.xml:

<include>
  <user id="anonymous">
    <params>
      <param name="password" value="welcome"/>
    </params>
    <variables>
      <variable name="user_context" value="anonymous"/>
      <variable name="effective_caller_id_name" value="Anonymous"/>
      <variable name="effective_caller_id_number" value="666"/>
      <variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
      <variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
    </variables>
  </user>
</include>

Then add the user's own dialplan to /usr/local/freeswitch/conf/dialplan/anonymous.xml:

<include>
  <context name="anonymous">
    <extension name="public_extensions">
      <condition field="destination_number" expression="^(10[01][0-9])$">
        <action application="transfer" data="$1 XML default"/>
      </condition>
    </extension>
    <extension name="conferences">
      <condition field="destination_number" expression="^(36\d{2})$">
        <action application="answer"/>
        <action application="conference" data="$1-${domain_name}@video-mcu"/>
      </condition>
    </extension>
    <extension name="echo">
      <condition field="destination_number" expression="^9196$">
        <action application="answer"/>
        <action application="echo"/>
      </condition>
    </extension>
  </context>
</include>

How to do it...

In a directory served by your HTTPS server (for example, Apache with an SSL certificate), put all the following files.
Minimal click-to-call caller client HTML (call.html): <html> <body>        <button id="startCall">Start Call</button>        <button id="endCall">End Call</button>        <br/>        <video id="remoteVideo"></video>        <br/>        <video id="localVideo" muted="muted" width="128px" height="96px"></video>        <script src="js/sip-0.7.0.min.js"></script>        <script src="call.js"></script> </body> </html> JAVASCRIPT (call.js): var session;   var endButton = document.getElementById('endCall'); endButton.addEventListener("click", function () {        session.bye();        alert("Call Ended"); }, false);   var startButton = document.getElementById('startCall'); startButton.addEventListener("click", function () {        session = userAgent.invite('sip:1010@gmaruzz.org', options);        alert("Call Started"); }, false);   var userAgent = new SIP.UA({        uri: 'anonymous@gmaruzz.org',        wsServers: ['wss://self2.gmaruzz.org:7443'],        authorizationUser: 'anonymous',        password: 'welcome' });   var options = {        media: {                constraints: {                        audio: true,                        video: true                },                render: {                        remote: document.getElementById('remoteVideo'),                        local: document.getElementById('localVideo')                }        } }; Minimal callee HTML (answer.html): <html> <body>        <button id="endCall">End Call</button>        <br/>        <video id="remoteVideo"></video>        <br/>        <video id="localVideo" muted="muted" width="128px" height="96px"></video>        <script src="js/sip-0.7.0.min.js"></script>        <script src="answer.js"></script> </body> </html> JAVASCRIPT (answer.js): var session;   var endButton = document.getElementById('endCall'); endButton.addEventListener("click", function () {        session.bye();        alert("Call Ended"); }, false);   var userAgent = new SIP.UA({        uri: '1010@gmaruzz.org',        wsServers: ['wss://self2.gmaruzz.org:7443'],        authorizationUser: '1010',        password: 'ciaociao' });   userAgent.on('invite', function (ciapalo) {        session = ciapalo;        session.accept({                media: {                        constraints: {                               audio: true,                                video: true                        },                        render: {                                remote: document.getElementById('remoteVideo'),                                local: document.getElementById('localVideo')                        }                  }        }); }); How it works... Our employee (the callee, or the person who will answer the call) will sit tight with the answer.html web page open on their browser. Upon page load, JavaScript will have created the SIP agent and registered it with our FreeSWITCH server as SIP user "1010" (just as our employee was on their own regular SIP phone). Our customer (the caller, or the person who initiates the communication) will visit the call.html webpage (while loading, this web page will register as an SIP "anonymous" user with FreeSWITCH), and then click on the Start Call button. This clicking will activate the JavaScript that creates the communication session using the invite method of the user agent, passing as an argument the SIP address of our employee. The Invite method initiates a call, and our FreeSWITCH server duly invites SIP user 1010. That happens to be the answer.html web page our employee is in front of. 
The INVITE sent from FreeSWITCH to answer.html will activate the JavaScript local user agent, which will create the session and accept the call. At this moment, the caller and callee are connected, and voice and video will begin to flow back and forth. The received audio or video stream will be rendered by the remoteVideo element in the web page, while its own stream (the video that is sent to the peer) will show up locally in the little localVideo element, which is muted so that it doesn't generate Larsen (feedback) whistles.

See also

The Configuring a SIP phone to register with FreeSWITCH recipe in Chapter 2, Connecting Telephones and Service Providers, the documentation at http://sipjs.com/guides/, and the mod_verto page on the FreeSWITCH Confluence wiki.

Summary

This article features the new disruptive technology that allows real-time, secure audio/video/data communication from hundreds of millions of browsers. FreeSWITCH is ready to serve as a gateway and an application server.

Resources for Article:

Further resources on this subject:
WebRTC with SIP and IMS [article]
Architecture of FreeSWITCH [article]
Calling your fellow agents [article]
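For reference, the SSL-related lines referred to in the first recipe's "How to do it..." and "How it works..." sections typically look something like the following on a stock FreeSWITCH 1.6 install. Treat this as an orientation sketch rather than a copy-paste configuration: the exact variable and parameter names can differ between FreeSWITCH versions, so check your own vars.xml and internal SIP profile.

<!-- In vars.xml: the two SSL toggles that default to false; switch them to true -->
<X-PRE-PROCESS cmd="set" data="internal_ssl_enable=true"/>
<X-PRE-PROCESS cmd="set" data="external_ssl_enable=true"/>

<!-- In the internal SIP profile (internal.xml): move WSS from the default 7443
     to 443 so signaling passes restrictive firewalls, as described above -->
<param name="wss-binding" value=":443"/>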


OAuth 2.0 – Gaining Consent

Packt
20 Oct 2015
3 min read
In this article, Charles Bihis, the author of the book Mastering OAuth 2.0, discusses the topic of gaining consent in OAuth 2.0. OAuth 2.0 is a framework built around the concept of resources and permissions for protecting those resources. Central to this is the idea of gaining consent. Let's look at an example.

(For more resources related to this topic, see here.)

How does it work?

You have just downloaded the iPhone app GoodApp. After installing, GoodApp would like to suggest contacts for you to add by looking at your Facebook friends. Conceptually, the OAuth 2.0 workflow for this can be represented by the following steps:

1. You ask GoodApp to suggest contacts.
2. GoodApp says, "Sure! But you'll have to authorize me first. Go here…"
3. GoodApp sends you to Facebook to log in and authorize GoodApp.
4. Facebook asks you directly for authorization to see if GoodApp can access your friend list on your behalf.
5. You say yes.
6. Facebook happily obliges, giving GoodApp your friend list.
7. GoodApp then uses this information to tailor suggested contacts for you.

This workflow gives a rough idea of how the interaction looks in the OAuth 2.0 model. However, of particular interest to us now are steps 3-5. In these steps, the service provider, Facebook, is asking you, the user, whether or not you allow the client application, GoodApp, to perform a particular action. This is known as user consent.

User consent

When a client application wants to perform a particular action relating to you or resources you own, it must first ask you for permission. In this case, the client application, GoodApp, wants to access your friend list on the service provider, Facebook. In order for Facebook to allow this, it must ask you directly. This is where the user consent screen comes in. It is simply a page that you are presented with in your application that describes the permissions being requested of you by the client application, along with an option to either allow or reject the request. You may be familiar with these types of screens already if you've ever tried to access resources on one service from another service - for example, the consent screen that is presented when you want to log into Pinterest using your Facebook credentials.

Incorporating this into our flow, we get an updated sequence of steps:

1. You ask GoodApp to suggest contacts.
2. GoodApp says, "Sure! But you'll have to authorize me first. Go here…"
3. GoodApp sends you to Facebook. Here, Facebook asks you directly for authorization for GoodApp to access your friend list on your behalf. It does this by presenting the user consent form, which you can either accept or deny. Let's assume you accept.
4. Facebook happily obliges, giving GoodApp your friend list.
5. GoodApp then uses this information to tailor suggested contacts for you.

When you accept the terms on the user consent screen, you have allowed GoodApp access to your Facebook friend list on your behalf. This is a concept known as delegated authority, and it is all accomplished by gaining consent. (A concrete sketch of what this hand-off looks like on the wire appears at the end of this article.)

Summary

In this article, we discussed the idea of gaining consent in OAuth 2.0, and how it works, with the help of an example and flow charts.

Resources for Article:

Further resources on this subject:
Oracle API Management Implementation 12c [article]
Find Friends on Facebook [article]
Core Ephesoft Features [article]
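To make steps 3-5 of the first workflow more tangible, here is roughly what the consent hand-off looks like on the wire in an authorization-code style flow. The endpoint path and the parameter values are illustrative placeholders (Facebook's real dialog URL and GoodApp's real client ID would differ); the parameter names - response_type, client_id, redirect_uri, scope, and state - are the standard OAuth 2.0 ones.

Step 3 - GoodApp redirects your browser to the service provider:

  GET https://www.facebook.com/dialog/oauth
      ?response_type=code
      &client_id=GOODAPP_CLIENT_ID
      &redirect_uri=https://goodapp.example.com/oauth/callback
      &scope=user_friends
      &state=xyz123

Steps 4-5 - Facebook shows the consent screen; if you allow the request, it redirects your browser back to the client application:

  GET https://goodapp.example.com/oauth/callback?code=AUTH_CODE&state=xyz123

GoodApp then exchanges AUTH_CODE for an access token behind the scenes and uses that token to read your friend list on your behalf - which is the delegated authority described above.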


Fine-tune the NGINX Configuration

Packt
14 Jul 2015
20 min read
In this article by Rahul Sharma, author of the book NGINX High Performance, we will cover the following topics:

NGINX configuration syntax
Configuring NGINX workers
Configuring NGINX I/O
Configuring TCP
Setting up the server

(For more resources related to this topic, see here.)

NGINX configuration syntax

This section aims to cover the NGINX configuration syntax in good detail. The complete configuration file has a logical structure that is composed of directives grouped into a number of sections. A section defines the configuration for a particular NGINX module, for example, the http section defines the configuration for the ngx_http_core module.

An NGINX configuration has the following syntax:

Valid directives begin with a variable name and then state an argument or series of arguments separated by spaces.
All valid directives end with a semicolon (;).
Sections are defined with curly braces ({}).
Sections can be nested in one another. The nested section defines a module valid under the particular section, for example, the gzip section under the http section.
Configuration outside any section is part of the NGINX global configuration.
The lines starting with the hash (#) sign are comments.
Configurations can be split into multiple files, which can be grouped using the include directive. This helps in organizing code into logical components. Inclusions are processed recursively, that is, an include file can further have include statements.
Spaces, tabs, and new line characters are not part of the NGINX configuration. They are not interpreted by the NGINX engine, but they help to make the configuration more readable.

Thus, the complete file looks like the following code:

#The configuration begins here
global1 value1;
#This defines a new section
section {
  sectionvar1 value1;
  include file1;
  subsection {
    subsectionvar1 value1;
  }
}
#The section ends here
global2 value2;
# The configuration ends here

NGINX provides the -t option, which can be used to test and verify the configuration written in the file. If the file or any of the included files contains any errors, it prints the line numbers causing the issue:

$ sudo nginx -t

This checks the validity of the default configuration file. If the configuration is written in a file other than the default one, use the -c option to test it. You cannot test half-baked configurations; for example, if you defined a server section for your domain in a separate file, any attempt to test such a file will throw errors. The file has to be complete in all respects.

Now that we have a clear idea of the NGINX configuration syntax, we will try to play around with the default configuration. This article only aims to discuss the parts of the configuration that have an impact on performance. The NGINX catalog has a large number of modules that can be configured for various purposes. This article does not try to cover all of them, as the details are beyond its scope. Please refer to the NGINX documentation at http://nginx.org/en/docs/ to know more about the modules.

Configuring NGINX workers

NGINX runs a fixed number of worker processes as per the specified configuration. In the following sections, we will work with NGINX worker parameters. These parameters are mostly part of the NGINX global context.

worker_processes

The worker_processes directive controls the number of workers:

worker_processes 1;

The default value for this is 1, that is, NGINX runs only one worker. The value should be changed to an optimal value depending on the number of cores available, disks, network subsystem, server load, and so on. As a starting point, set the value to the number of cores available. Determine the number of cores available using lscpu:

$ lscpu
Architecture:     x86_64
CPU op-mode(s):   32-bit, 64-bit
Byte Order:       Little Endian
CPU(s):           4

The same can be accomplished by grepping cpuinfo:

$ cat /proc/cpuinfo | grep 'processor' | wc -l

Now, set this value to the parameter:

# One worker per CPU-core.
worker_processes 4;

Alternatively, the directive can have auto as its value. This determines the number of cores and spawns an equal number of workers. When NGINX is running with SSL, it is a good idea to have multiple workers. The SSL handshake is blocking in nature and involves disk I/O. Thus, using multiple workers leads to improved performance.

accept_mutex

Since we have configured multiple workers in NGINX, we should also configure the flags that impact worker selection. The accept_mutex parameter, available under the events section, will enable each of the available workers to accept new connections one by one. By default, the flag is set to on. The following code shows this:

events {
  accept_mutex on;
}

If the flag is turned off, all of the available workers will wake up from the waiting state, but only one worker will process the connection. This results in the Thundering Herd phenomenon, which is repeated a number of times per second. The phenomenon causes reduced server performance, as all the woken-up workers take up CPU time before going back to the wait state. This results in unproductive CPU cycles and nonutilized context switches.

accept_mutex_delay

When accept_mutex is enabled, only one worker, which has the mutex lock, accepts connections, while others wait for their turn. The accept_mutex_delay corresponds to the timeframe for which the worker would wait, after which it tries to acquire the mutex lock and starts accepting new connections. The directive is available under the events section with a default value of 500 milliseconds. The following code shows this:

events {
  accept_mutex_delay 500ms;
}

worker_connections

The next configuration to look at is worker_connections, with a default value of 512. The directive is present under the events section. The directive sets the maximum number of simultaneous connections that can be opened by a worker process. The following code shows this:

events {
  worker_connections 512;
}

Increase worker_connections to something like 1,024 to accept more simultaneous connections. The value of worker_connections does not directly translate into the number of clients that can be served simultaneously. Each browser opens a number of parallel connections to download various components that compose a web page, for example, images, scripts, and so on. Different browsers have different values for this; for example, IE works with two parallel connections while Chrome opens six. The number of connections also includes sockets opened with the upstream server, if any.

worker_rlimit_nofile

The number of simultaneous connections is limited by the number of file descriptors available on the system, as each socket will open a file descriptor. If NGINX tries to open more sockets than the available file descriptors, it will lead to the Too many opened files message in the error.log. Check the number of file descriptors using ulimit:

$ ulimit -n

Now, increase this to a value more than worker_processes * worker_connections. The value should be increased for the user that runs the worker process. Check the user directive to get the username (a sketch of the OS-level change appears at the end of this article).

NGINX provides the worker_rlimit_nofile directive, which can be an alternative way of setting the available file descriptors rather than modifying ulimit. Setting the directive will have a similar impact as updating ulimit for the worker user. The value of this directive overrides the ulimit value set for the user. The directive is not present by default. Set a large value to handle a large number of simultaneous connections. The following code shows this:

worker_rlimit_nofile 20960;

To determine the OS limits imposed on a process, read the file /proc/$pid/limits. $pid corresponds to the PID of the process.

multi_accept

The multi_accept flag enables an NGINX worker to accept as many connections as possible when it gets the notification of a new connection. The purpose of this flag is to accept all connections in the listen queue at once. If the directive is disabled, a worker process will accept connections one by one. The following code shows this:

events {
  multi_accept on;
}

The directive is available under the events section with the default value off. If the server has a constant stream of incoming connections, enabling multi_accept may result in a worker accepting more connections than the number specified in worker_connections. The overflow will lead to performance loss, as the previously accepted connections, part of the overflow, will not get processed.

use

NGINX provides several methods for connection processing. Each of the available methods allows NGINX workers to monitor multiple socket file descriptors, that is, when there is data available for reading/writing. These calls allow NGINX to process multiple socket streams without getting stuck in any one of them. The methods are platform-dependent, and the configure command, used to build NGINX, selects the most efficient method available on the platform. If we want to use other methods, they must be enabled first in NGINX. The use directive allows us to override the default method with the method specified. The directive is part of the events section:

events {
  use select;
}

NGINX supports the following methods of processing connections:

select: This is the standard method of processing connections. It is built automatically on platforms that lack more efficient methods. The module can be enabled or disabled using the --with-select_module or --without-select_module configuration parameter.
poll: This is the standard method of processing connections. It is built automatically on platforms that lack more efficient methods. The module can be enabled or disabled using the --with-poll_module or --without-poll_module configuration parameter.
kqueue: This is an efficient method of processing connections available on FreeBSD 4.1, OpenBSD 2.9+, NetBSD 2.0, and OS X. There are the additional directives kqueue_changes and kqueue_events. These directives specify the number of changes and events that NGINX will pass to the kernel. The default value for both of these is 512. The kqueue method will ignore the multi_accept directive if it has been enabled.
epoll: This is an efficient method of processing connections available on Linux 2.6+. The method is similar to the FreeBSD kqueue. There is also the additional directive epoll_events. This specifies the number of events that NGINX will pass to the kernel.
The default value for this is 512. /dev/poll: This is an efficient method of processing connections available on Solaris 7 11/99+, HP/UX 11.22+, IRIX 6.5.15+, and Tru64 UNIX 5.1A+. This has the additional directives, devpoll_events and devpoll_changes. The directives specify the number of changes and events that NGINX will pass to the kernel. The default value for both of these is 32. eventport: This is an efficient method of processing connections available on Solaris 10. The method requires necessary security patches to avoid kernel crash issues. rtsig: Real-time signals is a connection processing method available on Linux 2.2+. The method has some limitations. On older kernels, there is a system-wide limit of 1,024 signals. For high loads, the limit needs to be increased by setting the rtsig-max parameter. For kernel 2.6+, instead of the system-wide limit, there is a limit on the number of outstanding signals for each process. NGINX provides the worker_rlimit_sigpending parameter to modify the limit for each of the worker processes: worker_rlimit_sigpending 512; The parameter is part of the NGINX global configuration. If the queue overflows, NGINX drains the queue and uses the poll method to process the unhandled events. When the condition is back to normal, NGINX switches back to the rtsig method of connection processing. NGINX provides the rtsig_overflow_events, rtsig_overflow_test, and rtsig_overflow_threshold parameters to control how a signal queue is handled on overflows. The rtsig_overflow_events parameter defines the number of events passed to poll. The rtsig_overflow_test parameter defines the number of events handled by poll, after which NGINX will drain the queue. Before draining the signal queue, NGINX will look up how much it is filled. If the factor is larger than the specified rtsig_overflow_threshold, it will drain the queue. The rtsig method requires accept_mutex to be set. The method also enables the multi_accept parameter. Configuring NGINX I/O NGINX can also take advantage of the Sendfile and direct I/O options available in the kernel. In the following sections, we will try to configure parameters available for disk I/O. Sendfile When a file is transferred by an application, the kernel first buffers the data and then sends the data to the application buffers. The application, in turn, sends the data to the destination. The Sendfile method is an improved method of data transfer, in which data is copied between file descriptors within the OS kernel space, that is, without transferring data to the application buffers. This results in improved utilization of the operating system's resources. The method can be enabled using the sendfile directive. The directive is available for the http, server, and location sections. http{ sendfile on; } The flag is set to off by default. Direct I/O The OS kernel usually tries to optimize and cache any read/write requests. Since the data is cached within the kernel, any subsequent read request to the same place will be much faster because there's no need to read the information from slow disks. Direct I/O is a feature of the filesystem where reads and writes go directly from the applications to the disk, thus bypassing all OS caches. This results in better utilization of CPU cycles and improved cache effectiveness. The method is used in places where the data has a poor hit ratio. Such data does not need to be in any cache and can be loaded when required. It can be used to serve large files. The directio directive enables the feature. 
The directive is available for the http, server, and location sections: location /video/ { directio 4m; } Any file with size more than that specified in the directive will be loaded by direct I/O. The parameter is disabled by default. The use of direct I/O to serve a request will automatically disable Sendfile for the particular request. Direct I/O depends on the block size while doing a data transfer. NGINX has the directio_alignment directive to set the block size. The directive is present under the http, server, and location sections: location /video/ { directio 4m; directio_alignment 512; } The default value of 512 bytes works well for all boxes unless it is running a Linux implementation of XFS. In such a case, the size should be increased to 4 KB. Asynchronous I/O Asynchronous I/O allows a process to initiate I/O operations without having to block or wait for it to complete. The aio directive is available under the http, server, and location sections of an NGINX configuration. Depending on the section, the parameter will perform asynchronous I/O for the matching requests. The parameter works on Linux kernel 2.6.22+ and FreeBSD 4.3. The following code shows this: location /data { aio on; } By default, the parameter is set to off. On Linux, aio needs to be enabled with directio, while on FreeBSD, sendfile needs to be disabled for aio to take effect. If NGINX has not been configured with the --with-file-aio module, any use of the aio directive will cause the unknown directive aio error. The directive has a special value of threads, which enables multithreading for send and read operations. The multithreading support is only available on the Linux platform and can only be used with the epoll, kqueue, or eventport methods of processing requests. In order to use the threads value, configure multithreading in the NGINX binary using the --with-threads option. Post this, add a thread pool in the NGINX global context using the thread_pool directive. Use the same pool in the aio configuration: thread_pool io_pool threads=16; http{ ….....    location /data{      sendfile   on;      aio       threads=io_pool;    } } Mixing them up The three directives can be mixed together to achieve different objectives on different platforms. The following configuration will use sendfile for files with size smaller than what is specified in directio. Files served by directio will be read using asynchronous I/O: location /archived-data/{ sendfile on; aio on; directio 4m; } The aio directive has a sendfile value, which is available only on the FreeBSD platform. The value can be used to perform Sendfile in an asynchronous manner: location /archived-data/{ sendfile on; aio sendfile; } NGINX invokes the sendfile() system call, which returns with no data in the memory. Post this, NGINX initiates data transfer in an asynchronous manner. Configuring TCP HTTP is an application-based protocol, which uses TCP as the transport layer. In TCP, data is transferred in the form of blocks known as TCP packets. NGINX provides directives to alter the behavior of the underlying TCP stack. These parameters alter flags for an individual socket connection. TCP_NODELAY TCP/IP networks have the "small packet" problem, where single-character messages can cause network congestion on a highly loaded network. Such packets are 41 bytes in size, where 40 bytes are for the TCP header and 1 byte has useful information. These small packets have huge overhead, around 4000 percent and can saturate a network. 
John Nagle solved this problem (Nagle's algorithm) by not sending the small packets immediately. All such packets are collected for some amount of time and then sent in one go as a single packet. This results in improved efficiency of the underlying network. Thus, a typical TCP/IP stack waits for up to 200 milliseconds before sending the data packets to the client.

It is important to note that the problem exists with applications such as Telnet, where each keystroke is sent over the wire. The problem is not relevant to a web server, which serves static files. The files will mostly form full TCP packets, which can be sent immediately instead of waiting for 200 milliseconds. The TCP_NODELAY option can be used while opening a socket to disable Nagle's buffering algorithm and send the data as soon as it is available. NGINX provides the tcp_nodelay directive to enable this option. The directive is available under the http, server, and location sections of an NGINX configuration:

http {
  tcp_nodelay on;
}

The directive is enabled by default. NGINX uses tcp_nodelay for connections in keep-alive mode.

TCP_CORK

As an alternative to Nagle's algorithm, Linux provides the TCP_CORK option. The option tells the TCP stack to append packets and send them when they are full or when the application instructs it to send the packet by explicitly removing TCP_CORK. This results in an optimal amount of data packets being sent and, thus, improves the efficiency of the network. The TCP_CORK option is available as the TCP_NOPUSH flag on FreeBSD and Mac OS. NGINX provides the tcp_nopush directive to enable TCP_CORK over the connection socket. The directive is available under the http, server, and location sections of an NGINX configuration:

http {
  tcp_nopush on;
}

The directive is disabled by default. NGINX uses tcp_nopush for requests served with sendfile.

Setting them up

The two directives discussed previously do seemingly contradictory things; the former makes sure that network latency is reduced, while the latter tries to optimize the data packets sent. An application should set both of these options to get efficient data transfer. Enabling tcp_nopush along with sendfile makes sure that while transferring a file, the kernel creates the maximum amount of full TCP packets before sending them over the wire. The last packet(s) can be partial TCP packets, which could end up waiting with TCP_CORK being enabled. NGINX makes sure it removes TCP_CORK to send these packets. Since tcp_nodelay is also set, these packets are then sent immediately over the network, that is, without any delay.

Setting up the server

The following configuration sums up all the changes proposed in the preceding sections:

worker_processes 3;
worker_rlimit_nofile 8000;

events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

http {
  sendfile on;
  aio on;
  directio 4m;
  tcp_nopush on;
  tcp_nodelay on;
  # Rest of the NGINX configuration removed for brevity
}

It is assumed that NGINX runs on a quad-core server. Thus, three worker processes have been spawned to take advantage of three out of the four available cores, leaving one core for other processes. Each of the workers has been configured to work with 1,024 connections. Correspondingly, the nofile limit has been increased to 8,000. By default, all worker processes operate with mutex; thus, the flag has not been set. Each worker processes multiple connections in one go using the epoll method. In the http section, NGINX has been configured to serve files larger than 4 MB using direct I/O, while efficiently buffering smaller files using Sendfile. TCP options have also been set up to efficiently utilize the available network.

Measuring gains

It is time to test the changes and make sure that they have given a performance gain. Run a series of tests using Siege/JMeter to get new performance numbers. The tests should be performed with the same configuration to get a comparable output:

$ siege -b -c 790 -r 50 -q http://192.168.2.100/hello

Transactions:              79000 hits
Availability:              100.00 %
Elapsed time:              24.25 secs
Data transferred:          12.54 MB
Response time:             0.20 secs
Transaction rate:          3257.73 trans/sec
Throughput:                0.52 MB/sec
Concurrency:               660.70
Successful transactions:   39500
Failed transactions:       0
Longest transaction:       3.45
Shortest transaction:      0.00

The results from Siege should be evaluated and compared to the baseline:

Throughput: The transaction rate defines this as roughly 3,250 requests per second.
Error rate: Availability is reported as 100 percent; thus, the error rate is 0 percent.
Response time: The results show a response time of 0.20 seconds.

Thus, these new numbers demonstrate performance improvement in various respects. After the server configuration is updated with all the changes, re-run all tests with increased numbers. The aim should be to determine the new baseline numbers for the updated configuration.

Summary

The article started with an overview of the NGINX configuration syntax. Going further, we discussed worker_connections and the related parameters. These allow you to take advantage of the available hardware. The article also talked about the different event processing mechanisms available on different platforms. The configuration discussed helped in processing more requests, thus improving the overall throughput.

NGINX is primarily a web server; thus, it has to serve all kinds of static content. Large files can take advantage of direct I/O, while smaller content can take advantage of Sendfile. The different disk modes make sure that we have an optimal configuration to serve the content. In the TCP stack, we discussed the flags available to alter the default behavior of TCP sockets. The tcp_nodelay directive helps in improving latency. The tcp_nopush directive can help in efficiently delivering the content. Both these flags lead to improved response times. In the last part of the article, we applied all the changes to our server and then ran performance tests to determine their effectiveness. In the next article, we will try to configure buffers, timeouts, and compression to improve the utilization of the available network.

Resources for Article:

Further resources on this subject:
Using Nginx as a Reverse Proxy [article]
Nginx proxy module [article]
Introduction to nginx [article]
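As a footnote to the worker_rlimit_nofile discussion earlier in this article: if you also want to raise the limit at the OS level for the user that runs the workers, that is typically done through the PAM limits configuration. The snippet below is only a sketch - it assumes the workers run as a user called nginx (check the user directive for the actual account) and reuses the 20,960 figure from the article. As a rough sanity check, the limit should comfortably exceed worker_processes * worker_connections (here 3 x 1,024 = 3,072 sockets at a minimum, before counting upstream connections, open log files, and so on).

# /etc/security/limits.conf (or a file under /etc/security/limits.d/)
# <domain>   <type>   <item>    <value>
nginx         soft     nofile    20960
nginx         hard     nofile    20960

Note that limits.conf applies to PAM sessions; if NGINX is started by an init system such as systemd, the limit is set in the unit file instead (for example, via LimitNOFILE), so check how your NGINX is launched. Inside NGINX, worker_rlimit_nofile still overrides whatever limit the shell reports, as noted above.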


iReport in NetBeans

Packt
19 Mar 2010
3 min read
Creating different types of reports inside the NetBeans IDE

The first step is to download the NetBeans IDE and the iReport plugin for it. The iReport plugin for NetBeans is available for free download at the following locations: https://sourceforge.net/projects/ireport/files or http://plugins.netbeans.org/PluginPortal/faces/PluginDetailPage.jsp?pluginid=4425

After downloading the plugin, follow the listed steps to install the plugin in NetBeans:

1. Start the NetBeans IDE.
2. Go to Tools | Plugins.
3. Select the Downloaded tab.
4. Press Add Plugins….
5. Select the plugin files. For iReport 3.7.0 the plugins are: iReport-3.7.0.nbm, jasperreports-components-plugin-3.7.0.nbm, jasperreportsextensions-plugin-3.7.0.nbm, and jasperserver-plugin-3.7.0.nbm.
6. Check the Install checkbox of ireport-designer, and press the Install button at the bottom of the window.
7. Press Next >, and accept the terms of the License Agreement.
8. If the Verify Certificate dialog box appears, click Continue.
9. Press Install, and wait for the installer to complete the installation.
10. After the installation is done, press Finish and close the Plugins dialog. If the IDE requests a restart, do it.

Now the IDE is ready for creating reports.

Creating reports

We have already learnt about creating various types of reports, such as reports without parameters, reports with parameters, reports with variables, subreports, crosstab reports, reports with charts and images, and so on, along with the knowledge associated with each of these report types. Now, we will quickly learn how to create these reports using NetBeans with the help of the installed iReport plugins.

Creating a NetBeans database JDBC connection

The first step is to create a database connection, which will be used by the report data sources. Follow the listed steps:

1. Select the Services tab from the left side of the project window.
2. Select Databases.
3. Right-click on Databases, and press New Connection….
4. In the New Database Connection dialog, set the following under Basic setting, and check the Remember password checkbox:

Driver Name: MySQL (Connector/J Driver)
Host: localhost
Port: 3306
Database: inventory
User Name: root
Password: packt

5. Press OK. Now the connection is created, and you can see it under the Services | Databases section.

Creating a report data source

The NetBeans database JDBC connection created previously will be used by a report data source, which in turn will be used by the report. Follow the listed steps to create the data source:

1. From the NetBeans toolbar, press the Report Datasources button.
2. Press New.
3. Select NetBeans Database JDBC connection, and press Next >.
4. Enter inventory in the Name field, and from the Connection drop-down list, select jdbc:mysql://localhost:3306/inventory [root on Default schema].
5. Press Test, and if the connection is successful, press Save and close the Connections / Datasources dialog box.


Containers and Python are in demand, but Blockchain is all hype, says Skill Up developer survey

Richard Gall
05 Jun 2019
4 min read
For the last 4 years at Packt we've been running Skill Up - a survey that aims to capture everything that's important to the developer world when it comes to work and learning. Today we've published the results of our 2019 survey. In it, you'll find a wealth of insights based on data from more than 4,500 respondents from 118 countries.

Key findings in Packt's 2019 developer survey

Over the next few weeks we'll be doing a deeper dive into some of the issues raised. But before we get started, below are some of the key findings and takeaways from this year's report. Some confirm assumptions about the tech industry that have been around for some time, while others might actually surprise you...

Python remains the most in-demand programming language

This one wasn't that surprising - Python's popularity has come through in every Skill Up since 2015. But what was interesting is that this year's findings were able to illustrate that Python's popularity isn't confined to a specific group - across age groups, salary bands, and even developers using different primary programming languages, Python is regarded as a vital part of the software engineer's toolkit.

Containerization is impacting the way all developers work

We know containers are popular. Docker has been a core part of the engineering landscape for the last half a decade or so. But this year's Skill Up survey not only confirms that fact, it also highlights that the influence of containerization is far-reaching.

Read next: 6 signs you need containers

This could well indicate that the gap between development and deployment is getting smaller, with developers today more likely than ever to be accountable for how their code actually runs in production. As one respondent told us, "I want to become more well-rounded, and I believe enhancing my DevOps arsenal is a great way to start."

Not everyone is using cloud

Cloud is a big change for the software industry. But we should be cautious about overestimating the extent to which it is actually being used by developers - in this year's survey, 47% of respondents said they don't use any cloud platforms. Perhaps we shouldn't be that surprised - many respondents are working in areas like government and healthcare that require strict discipline when it comes to privacy and data protection and are (not unrelatedly) known for being a little slow to adopt emerging technology trends. Similarly, the growth of the PaaS market means that many developers and other technology professionals are using cloud-based products alongside their work, rather than developing in a way that is strictly 'cloud-native'.

Almost half of all developers spend time learning every day

Learning is an essential part of what it means to be a developer. In this year's survey we saw what that means in practice, with around 50% of respondents telling us that they spend time learning every single day. A further 30% said they spend time learning something at least once a week. This leaves us wondering - what the hell is everyone else doing if they're not learning? The survey data also highlights that those in the lowest and highest salary bands are most likely to spend time learning every day.

Java is the programming language developers are most likely to regret learning

When we asked respondents what tools they regret learning, many said they didn't regret anything. However, for those that do have regrets, Java was the tool that was mentioned the most. There are a number of reasons for this, but Oracle's decision to focus on enterprise Java and withdraw support for OpenJDK is undoubtedly important in creating a degree of uncertainty around the language. Among those that said they regret learning Java, there is a sense that the language is simply going out of date. One respondent called it "the COBOL of modern programming."

Blockchain is overhyped and failing to deliver on expectations

It has long been suspected that Blockchain is being overhyped - and now we can confirm that feeling across developers, with 38% saying it has failed to deliver against expectations over the last 12 months. One respondent told us that they "couldn't get any gigs despite building blockchain apps", suggesting that despite capital's apparent hunger for all things Blockchain, the market isn't quite as big as the hype-merchants would have us believe.

We'll be throwing the spotlight on these issues and many more over the next few weeks. So make sure you check the Packt Hub for more insights and updates. In the meantime, you can read the report in full by downloading it here.

Putting Your Database at the Heart of Azure Solutions

Packt
28 Oct 2015
19 min read
In this article by Riccardo Becker, author of the book Learning Azure DocumentDB, we will see how to build a real scenario around an Internet of Things scenario. This scenario will build a basic Internet of Things platform that can help to accelerate building your own. In this article, we will cover the following: Have a look at a fictitious scenario Learn how to combine Azure components with DocumentDB Demonstrate how to migrate data to DocumentDB (For more resources related to this topic, see here.) Introducing an Internet of Things scenario Before we start exploring different capabilities to support a real-life scenario, we will briefly explain the scenario we will use throughout this article. IoT, Inc. IoT, Inc. is a fictitious start-up company that is planning to build solutions in the Internet of Things domain. The first solution they will build is a registration hub, where IoT devices can be registered. These devices can be diverse, ranging from home automation devices up to devices that control traffic lights and street lights. The main use case for this solution is offering the capability for devices to register themselves against a hub. The hub will be built with DocumentDB as its core component and some Web API to expose this functionality. Before devices can register themselves, they need to be whitelisted in order to prevent malicious devices to start registering. In the following screenshot, we see the high-level design of the registration requirement: The first version of the solution contains the following components: A Web API containing methods to whitelist, register, unregister, and suspend devices DocumentDB, containing all the device information including information regarding other Microsoft Azure resources Event Hub, a Microsoft Azure asset that enables scalable publish-subscribe mechanism to ingress and egress millions of events per second Power BI, Microsoft’s online offering to expose reporting capabilities and the ability to share reports Obviously, we will focus on the core of the solution which is DocumentDB but it is nice to touch some of the Azure components, as well to see how well they co-operate and how easy it is to set up a demonstration for IoT scenarios. The devices on the left-hand side are chosen randomly and will be mimicked by an emulator written in C#. The Web API will expose the functionality required to let devices register themselves at the solution and start sending data afterwards (which will be ingested to the Event Hub and reported using Power BI). Technical requirements To be able to service potentially millions of devices, it is necessary that registration request from a device is being stored in a separate collection based on the country where the device is located or manufactured. Every device is being modeled in the same way, whereas additional metadata can be provided upon registration or afterwards when updating. To achieve country-based partitioning, we will create a custom PartitionResolver to achieve this goal. To extend the basic security model, we reduce the amount of sensitive information in our configuration files. Enhance searching capabilities because we want to service multiple types of devices each with their own metadata and device-specific information. Querying on all the information is desired to support full-text search and enable users to quickly search and find their devices. Designing the model Every device is being modeled similar to be able to service multiple types of devices. 
The device model contains at least the deviceid and a location. Furthermore, the device model contains a dictionary where additional device properties can be stored. The next code snippet shows the device model: [JsonProperty("id")]         public string DeviceId { get; set; }         [JsonProperty("location")]         public Point Location { get; set; }         //practically store any metadata information for this device         [JsonProperty("metadata")]         public IDictionary<string, object> MetaData { get; set; } The Location property is of type Microsoft.Azure.Documents.Spatial.Point because we want to run spatial queries later on in this section, for example, getting all the devices within 10 kilometers of a building. Building a custom partition resolver To meet the first technical requirement (partition data based on the country), we need to build a custom partition resolver. To be able to build one, we need to implement the IPartitionResolver interface and add some logic. The resolver will take the Location property of the device model and retrieves the country that corresponds with the latitude and longitude provided upon registration. In the following code snippet, you see the full implementation of the GeographyPartitionResolver class: public class GeographyPartitionResolver : IPartitionResolver     {         private readonly DocumentClient _client;         private readonly BingMapsHelper _helper;         private readonly Database _database;           public GeographyPartitionResolver(DocumentClient client, Database database)         {             _client = client;             _database = database;             _helper = new BingMapsHelper();         }         public object GetPartitionKey(object document)         {             //get the country for this document             //document should be of type DeviceModel             if (document.GetType() == typeof(DeviceModel))             {                 //get the Location and translate to country                 var country = _helper.GetCountryByLatitudeLongitude(                     (document as DeviceModel).Location.Position.Latitude,                     (document as DeviceModel).Location.Position.Longitude);                 return country;             }             return String.Empty;         }           public string ResolveForCreate(object partitionKey)         {             //get the country for this partitionkey             //check if there is a collection for the country found             var countryCollection = _client.CreateDocumentCollectionQuery(database.SelfLink).            ToList().Where(cl => cl.Id.Equals(partitionKey.ToString())).FirstOrDefault();             if (null == countryCollection)             {                 countryCollection = new DocumentCollection { Id = partitionKey.ToString() };                 countryCollection =                     _client.CreateDocumentCollectionAsync(_database.SelfLink, countryCollection).Result;             }             return countryCollection.SelfLink;         }           /// <summary>         /// Returns a list of collectionlinks for the designated partitionkey (one per country)         /// </summary>         /// <param name="partitionKey"></param>         /// <returns></returns>         public IEnumerable<string> ResolveForRead(object partitionKey)         {             var countryCollection = _client.CreateDocumentCollectionQuery(_database.SelfLink).             
ToList().Where(cl => cl.Id.Equals(partitionKey.ToString())).FirstOrDefault();               return new List<string>             {                 countryCollection.SelfLink             };         }     } In order to have the DocumentDB client use this custom PartitionResolver, we need to assign it. The code is as follows: GeographyPartitionResolver resolver = new GeographyPartitionResolver(docDbClient, _database);   docDbClient.PartitionResolvers[_database.SelfLink] = resolver; //Adding a typical device and have the resolver sort out what //country is involved and whether or not the collection already //exists (and create a collection for the country if needed), use //the next code snippet. var deviceInAmsterdam = new DeviceModel             {                 DeviceId = Guid.NewGuid().ToString(),                 Location = new Point(4.8951679, 52.3702157)             };   Document modelAmsDocument = docDbClient.CreateDocumentAsync(_database.SelfLink,                 deviceInAmsterdam).Result;             //get all the devices in Amsterdam            var doc = docDbClient.CreateDocumentQuery<DeviceModel>(                 _database.SelfLink, null, resolver.GetPartitionKey(deviceInAmsterdam)); Now that we have created a country-based PartitionResolver, we can start working on the Web API that exposes the registration method. Building the Web API A Web API is an online service that can be used by any clients running any framework that supports the HTTP programming stack. Currently, REST is a way of interacting with APIs so that we will build a REST API. Building a good API should aim for platform independence. A well-designed API should also be able to extend and evolve without affecting existing clients. First, we need to whitelist the devices that should be able to register themselves against our device registry. The whitelist should at least contain a device ID, a unique identifier for a device that is used to match during the whitelisting process. A good candidate for a device ID is the mac address of the device or some random GUID. Registering a device The registration Web API contains a POST method that does the actual registration. First, it creates access to an Event Hub (not explained here) and stores the credentials needed inside the DocumentDB document. The document is then created inside the designated collection (based on the location). To learn more about Event Hubs, please visit https://azure.microsoft.com/en-us/services/event-hubs/.  
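The whitelisting step described earlier is not shown in the book's sample code. Purely as an illustration, before the registration method that follows, such a check could look like the sketch below; the "whitelist" collection name, the WhitelistEntry type, and the WhitelistChecker class are assumptions for this article, not part of the original code (the sketch relies on the Microsoft.Azure.Documents, Microsoft.Azure.Documents.Client, Newtonsoft.Json, and System.Linq namespaces already used elsewhere in the sample):

public class WhitelistEntry
{
    //the device id (mac address or random GUID) that was whitelisted up front
    [JsonProperty("id")]
    public string DeviceId { get; set; }
}

public class WhitelistChecker
{
    private readonly DocumentClient _client;
    private readonly Database _database;

    public WhitelistChecker(DocumentClient client, Database database)
    {
        _client = client;
        _database = database;
    }

    //returns true when the device id was whitelisted beforehand
    public bool IsWhitelisted(string deviceId)
    {
        //look up the collection that holds the whitelisted ids
        var whitelistCollection = _client.CreateDocumentCollectionQuery(_database.SelfLink)
            .ToList()
            .FirstOrDefault(c => c.Id == "whitelist");

        if (whitelistCollection == null)
        {
            return false;
        }

        //check whether the given device id appears in the whitelist
        return _client.CreateDocumentQuery<WhitelistEntry>(whitelistCollection.SelfLink)
            .ToList()
            .Any(entry => entry.DeviceId == deviceId);
    }
}

The registration POST method shown next could call IsWhitelisted first and reject unknown devices (for example, with a 403 Forbidden) before any Event Hub credentials are created.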
[Route("api/registration")]         [HttpPost]         public async Task<IHttpActionResult> Post([FromBody]DeviceModel value)         {             //add the device to the designated documentDB collection (based on country)             try             { var serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", serviceBusNamespace,                     String.Format("{0}/publishers/{1}", "telemetry", value.DeviceId))                     .ToString()                     .Trim('/');                 var sasToken = SharedAccessSignatureTokenProvider.GetSharedAccessSignature(EventHubKeyName,                     EventHubKey, serviceUri, TimeSpan.FromDays(365 * 100)); // hundred years will do                 //this token can be used by the device to send telemetry                 //this token and the eventhub name will be saved with the metadata of the document to be saved to DocumentDB                 value.MetaData.Add("Namespace", serviceBusNamespace);                 value.MetaData.Add("EventHubName", "telemetry");                 value.MetaData.Add("EventHubToken", sasToken);                 var document = await docDbClient.CreateDocumentAsync(_database.SelfLink, value);                 return Created(document.ContentLocation, value);            }             catch (Exception ex)             {                 return InternalServerError(ex);             }         } After this registration call, the right credentials on the Event Hub have been created for this specific device. The device is now able to ingress data to the Event Hub and have consumers like Power BI consume the data and present it. Event Hubs is a highly scalable publish-subscribe event ingestor. It can collect millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Once collected into Event Hubs, you can transform and store the data by using any real-time analytics provider or with batching/storage adapters. At the time of writing, Microsoft announced the release of Azure IoT Suite and IoT Hubs. These solutions offer internet of things capabilities as a service and are well-suited to build our scenario as well. Increasing searching We have seen how to query our documents and retrieve the information we need. For this approach, we need to understand the DocumentDB SQL language. Microsoft has an online offering that enables full-text search called Azure Search service. This feature enables us to perform full-text searches and it also includes search behaviours similar to search engines. We could also benefit from so called type-ahead query suggestions based on the input of a user. Imagine a search box on our IoT Inc. portal that offers free text searching while the user types and search for devices that include any of the search terms on the fly. Azure Search runs on Azure; therefore, it is scalable and can easily be upgraded to offer more search and storage capacity. Azure Search stores all your data inside an index, offering full-text search capabilities on your data. Setting up Azure Search Setting up Azure Search is pretty straightforward and can be done by using the REST API it offers or on the Azure portal. We will set up the Azure Search service through the portal and later on, we will utilize the REST API to start configuring our search service. We set up the Azure Search service through the Azure portal (http://portal.azure.com). Find the Search service and fill out some information. 
In the following screenshot, we can see how we have created the free tier for Azure Search: You can see that we use the Free tier for this scenario and that there are no datasources configured yet. We will do that know by using the REST API. We will use the REST API, since it offers more insight on how the whole concept works. We use Fiddler to create a new datasource inside our search environment. The following screenshot shows how to use Fiddler to create a datasource and add a DocumentDB collection: In the Composer window of Fiddler, you can see we need to POST a payload to the Search service we created earlier. The Api-Key is mandatory and also set the content type to be JSON. Inside the body of the request, the connection information to our DocumentDB environment is need and the collection we want to add (in this case, Netherlands). Now that we have added the collection, it is time to create an Azure Search index. Again, we use Fiddler for this purpose. Since we use the free tier of Azure Search, we can only add five indexes at most. For this scenario, we add an index on ID (device ID), location, and metadata. At the time of writing, Azure Search does not support complex types. Note that the metadata node is represented as a collection of strings. We could check in the portal to see if the creation of the index was successful. Go to the Search blade and select the Search service we have just created. You can check the indexes part to see whether the index was actually created. The next step is creating an indexer. An indexer connects the index with the provided data source. Creating this indexer takes some time. You can check in the portal if the indexing process was successful. We actually find that documents are part of the index now. If your indexer needs to process thousands of documents, it might take some time for the indexing process to finish. You can check the progress of the indexer using the REST API again. https://iotinc.search.windows.net/indexers/deviceindexer/status?api-version=2015-02-28 Using this REST call returns the result of the indexing process and indicates if it is still running and also shows if there are any errors. Errors could be caused by documents that do not have the id property available. The final step involves testing to check whether the indexing works. We will search for a device ID, as shown in the next screenshot: In the Inspector tab, we can check for the results. It actually returns the correct document also containing the location field. The metadata is missing because complex JSON is not supported (yet) at the time of writing. Indexing complex JSON types is not supported yet. It is possible to add SQL queries to the data source. We could explicitly add a SELECT statement to surface the properties of the complex JSON we have like metadata or the Point property. Try adding additional queries to your data source to enable querying complex JSON types. Now that we have created an Azure Search service that indexes our DocumentDB collection(s), we can build a nice query-as-you-type field on our portal. Try this yourself. Enhancing security Microsoft Azure offers a capability to move your secrets away from your application towards Azure Key Vault. Azure Key Vault helps to protect cryptographic keys, secrets, and other information you want to store in a safe place outside your application boundaries (connectionstring are also good candidates). Key Vault can help us to protect the DocumentDB URI and its key. 
DocumentDB has no (in-place) encryption feature at the time of writing, although a lot of people already asked for it to be on the roadmap. Creating and configuring Key Vault Before we can use Key Vault, we need to create and configure it first. The easiest way to achieve this is by using PowerShell cmdlets. Please visit https://msdn.microsoft.com/en-us/mt173057.aspx to read more about PowerShell. The following PowerShell cmdlets demonstrate how to set up and configure a Key Vault: Command Description Get-AzureSubscription This command will prompt you to log in using your Microsoft Account. It returns a list of all Azure subscriptions that are available to you. Select-AzureSubscription -SubscriptionName "Windows Azure MSDN Premium" This tells PowerShell to use this subscription as being subject to our next steps. Switch-AzureMode AzureResourceManager New-AzureResourceGroup –Name 'IoTIncResourceGroup' –Location 'West Europe' This creates a new Azure Resource Group with a name and a location. New-AzureKeyVault -VaultName 'IoTIncKeyVault' -ResourceGroupName 'IoTIncResourceGroup' -Location 'West Europe' This creates a new Key Vault inside the resource group and provide a name and location. $secretvalue = ConvertTo-SecureString '<DOCUMENTDB KEY>' -AsPlainText –Force This creates a security string for my DocumentDB key. $secret = Set-AzureKeyVaultSecret -VaultName 'IoTIncKeyVault' -Name 'DocumentDBKey' -SecretValue $secretvalue This creates a key named DocumentDBKey into the vault and assigns it the secret value we have just received. Set-AzureKeyVaultAccessPolicy -VaultName 'IoTIncKeyVault' -ServicePrincipalName <SPN> -PermissionsToKeys decrypt,sign This configures the application with the Service Principal Name <SPN> to get the appropriate rights to decrypt and sign Set-AzureKeyVaultAccessPolicy -VaultName 'IoTIncKeyVault' -ServicePrincipalName <SPN> -PermissionsToSecrets Get This configures the application with SPN to also be able to get a key. Key Vault must be used together with Azure Active Directory to work. The SPN we need in the steps for powershell is actually is a client ID of an application I have set up in my Azure Active Directory. Please visit https://azure.microsoft.com/nl-nl/documentation/articles/active-directory-integrating-applications/ to see how you can create an application. Make sure to copy the client ID (which is retrievable afterwards) and the key (which is not retrievable afterwards). We use these two pieces of information to take the next step. Using Key Vault from ASP.NET In order to use the Key Vault we have created in the previous section, we need to install some NuGet packages into our solution and/or projects: Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory -Version 2.16.204221202   Install-Package Microsoft.Azure.KeyVault These two packages enable us to use AD and Key Vault from our ASP.NET application. The next step is to add some configuration information to our web.config file: <add key="ClientId" value="<CLIENTID OF THE APP CREATED IN AD" />     <add key="ClientSecret" value="<THE SECRET FROM AZURE AD PORTAL>" />       <!-- SecretUri is the URI for the secret in Azure Key Vault -->     <add key="SecretUri" value="https://iotinckeyvault.vault.azure.net:443/secrets/DocumentDBKey" /> If you deploy the ASP.NET application to Azure, you could even configure these settings from the Azure portal itself, completely removing this from the web.config file. This technique adds an additional ring of security around your application. 
The following code snippet shows how to use AD and Key Vault inside the registration functionality of our scenario: //no more keys in code or .config files. Just a appid, secret and the unique URL to our key (SecretUri). When deploying to Azure we could             //even skip this by setting appid and clientsecret in the Azure Portal.             var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(Utils.GetToken));             var sec = kv.GetSecretAsync(WebConfigurationManager.AppSettings["SecretUri"]).Result.Value; The Utils.GetToken method is shown next. This method retrieves an access token from AD by supplying the ClientId and the secret. Since we configured Key Vault to allow this application to get the keys, the call to GetSecretAsync() will succeed. The code is as follows: public async static Task<string> GetToken(string authority, string resource, string scope)         {             var authContext = new AuthenticationContext(authority);             ClientCredential clientCred = new ClientCredential(WebConfigurationManager.AppSettings["ClientId"],                         WebConfigurationManager.AppSettings["ClientSecret"]);             AuthenticationResult result = await authContext.AcquireTokenAsync(resource, clientCred);               if (result == null)                 throw new InvalidOperationException("Failed to obtain the JWT token");             return result.AccessToken;         } Instead of storing the key to DocumentDB somewhere in code or in the web.config file, it is now moved away to Key Vault. We could do the same with the URI to our DocumentDB and with other sensitive information as well (for example, storage account keys or connection strings). Encrypting sensitive data The documents we created in the previous section contains sensitive data like namespaces, Event Hub names, and tokens. We could also use Key Vault to encrypt those specific values to enhance our security. In case someone gets hold of a document containing the device information, he is still unable to mimic this device since the keys are encrypted. Try to use Key Vault to encrypt the sensitive information that is stored in DocumentDB before it is saved in there. Migrating data This section discusses how to use a tool to migrate data from an existing data source to DocumentDB. For this scenario, we assume that we already have a large datastore containing existing devices and their registration information (Event Hub connection information). In this section, we will see how to migrate an existing data store to our new DocumentDB environment. We use the DocumentDB Data Migration Tool for this. You can download this tool from the Microsoft Download Center (http://www.microsoft.com/en-us/download/details.aspx?id=46436) or from GitHub if you want to check the code. The tool is intuitive and enables us to migrate from several datasources: JSON files MongoDB SQL Server CSV files Azure Table storage Amazon DynamoDB HBase DocumentDB collections To demonstrate the use, we migrate our existing Netherlands collection to our United Kingdom collection. Start the tool and enter the right connection string to our DocumentDB database. We do this for both our source and target information in the tool. The connection strings you need to provide should look like this: AccountEndpoint=https://<YOURDOCDBURL>;AccountKey=<ACCOUNTKEY>;Database=<NAMEOFDATABASE>. You can click on the Verify button to make sure these are correct. 
In the Source Information field, we provide the Netherlands as being the source to pull data from. In the Target Information field, we specify the United Kingdom as the target. In the following screenshot, you can see how these settings are provided in the migration tool for the source information: The following screenshot shows the settings for the target information: It is also possible to migrate data to a collection that is not created yet. The migration tool can do this if you enter a collection name that is not available inside your database. You also need to select the pricing tier. Optionally, setting the partition key could help to distribute your documents based on this key across all collections you add in this screen. This information is sufficient to run our example. Go to the Summary tab and verify the information you entered. Press Import to start the migration process. We can verify a successful import on the Import results pane. This example is a simple migration scenario but the tool is also capable of using complex queries to only migrate those documents that need to moved or migrated. Try migrating data from an Azure Table storage table to DocumentDB by using this tool. Summary In this article, we saw how to integrate DocumentDB with other Microsoft Azure features. We discussed how to setup the Azure Search service and how create an index to our collection. We also covered how to use the Azure Search feature to enable full-text search on our documents which could enable users to query while typing. Next, we saw how to add additional security to our scenario by using Key Vault. We also discussed how to create and configure Key Vault by using PowerShell cmdlets, and we saw how to enable our ASP.NET scenario application to make use of the Key Vault .NET SDK. Then, we discussed how to retrieve the sensitive information from Key Vault instead of configuration files. Finally, we saw how to migrate an existing data source to our collection by using the DocumentDB Data Migration Tool. Resources for Article: Further resources on this subject: Microsoft Azure – Developing Web API For Mobile Apps [article] Introduction To Microsoft Azure Cloud Services [article] Security In Microsoft Azure [article]
Read more

Windows Drive Acquisition

Packt
21 Jul 2017
13 min read
In this article, by Oleg Skulkin and Scar de Courcier, authors of Windows Forensics Cookbook, we will cover drive acquisition in E01 format with FTK Imager, drive acquisition in RAW format with DC3DD, and mounting forensic images with Arsenal Image Mounter.

(For more resources related to this topic, see here.)

Before you can begin analysing evidence from a source, it first needs to be imaged. This describes a forensic process in which an exact copy of a drive is taken. This is an important step, especially if evidence needs to be taken to court, because forensic investigators must be able to demonstrate that they have not altered the evidence in any way.

The term forensic image can refer to either a physical or a logical image. Physical images are precise replicas of the drive they reference, whereas a logical image is a copy of a certain volume within that drive. In general, logical images show what the machine's user will have seen and dealt with, whereas physical images give a more comprehensive view of everything the drive holds.

A hash value is generated to verify the authenticity of the acquired image. Hash values are essentially cryptographic digital fingerprints which show whether a particular item is an exact copy of another. Altering even the smallest bit of data will generate a completely new hash value, thus demonstrating that the two items are not the same. When forensic investigators image a drive, they should generate a hash value for both the original drive and the acquired image. Some pieces of forensic software will do this for you.

There are a number of tools available for imaging hard drives, some of which are free and open source. However, the most popular way for forensic analysts to image hard drives is by using one of the more well-known forensic software vendors' solutions. This is because it is imperative to be able to explain how the image was acquired and to demonstrate its integrity, especially if you are working on a case that will be taken to court. Once you have your image, you will be able to analyse the digital evidence from a device without directly interfering with the device itself.

In this chapter, we will be looking at various tools that can help you image a Windows drive, taking you through the process of acquisition.

Drive acquisition in E01 format with FTK Imager

FTK Imager is an imaging and data preview tool by AccessData, which allows an examiner not only to create forensic images in different formats, including RAW, SMART, E01 and AFF, but also to preview data sources in a forensically sound manner. In the first recipe of this article, we will show you how to create a forensic image of a hard drive from a Windows system in E01 format. E01, or EnCase's Evidence File, is a standard format for forensic images in law enforcement. Such images consist of a header with case info (including acquisition date and time, examiner's name, acquisition notes, and an optional password), a bit-by-bit copy of the acquired drive (consisting of data blocks, each verified with its own CRC or Cyclical Redundancy Check), and a footer with an MD5 hash for the bitstream.

Getting ready

First of all, let's download FTK Imager from the AccessData website. To do so, go to the SOLUTIONS tab and then to Product Downloads. Now choose DIGITAL FORENSICS, and then FTK Imager. At the time of this writing, the most up-to-date version is 3.4.3, so click the green DOWNLOAD PAGE button on the right. You should now be at the download page.
Click on DOWNLOAD NOW button and fill in the form, after this you'll get the download link to the email you provided. The installation process is quite straightforward, all you need is just click Next a few times, so we won't cover it in the recipe. How to do it... There are two ways of initiating drive imaging process: Using Create Disk Image button from the Toolbar as shown in the following figure: Create Disk Image button on the Toolbar Use Create Disk Image option from the File menu as shown in the following figure: Create Disk Image... option in the File Menu You can choose any option you like. The first window you see is Select Source. Here you have five options: Physical Drive: This allows you to choose a physical drive as the source, with all partitions and unallocated space Logical Drive: This allows you to choose a logical drive as the source, for example, E: drive Image File: This allows you to choose an image file as the source, for example, if you need to convert you forensic image from one format to another Contents of a Folder: This allows you to choose a folder as the source, of course, no deleted files, and so on will be included Fernico Device: This allows you to restore images from multiple CD/DVD Of course, we want to image the whole drive to be able to work with deleted data and unallocated space, so: Let's choose Physical Drive option. Evidence source mustn't be altered in any way, so make sure you are using a hardware write blocker, you can use the one from Tableau, for example. These devices allow acquisition of  drive contents without creating the possibility of modifying the data.  FTK Imager Select Source window Click Next and you'll see the next window - Select Drive. Now you should choose the source drive from the drop down menu, in our case it's .PHYSICALDRIVE2. FTK Imager Select Drive window Ok, the source drive is chosen, click Finish. Next window - Create Image. We'll get back to this window soon, but for now, just click Add...  It's time to choose the destination image type. As we decided to create our image in EnCase's Evidence File format, let's choose E01. FTK Imager Select Image Type window Click Next and you'll see Evidence Item Information window. Here we have five fields to fill in: Case Number, Evidence Number, Unique Description, Examiner and Notes. All fields are optional. FTK Imager Evidence Item Information window Filled the fields or not, click Next. Now choose image destination. You can use Browse button for it. Also, you should fill in image filename. If you want your forensic image to be split, choose fragment size (in megabytes). E01 format supports compression, so if you want to reduce the image size, you can use this feature, as you can see in the following figure, we have chosen 6. And if you want the data in the image to be secured, use AD Encryption feature. AD Encryption is a whole image encryption, so not only is the raw data encrypted, but also any metadata. Each segment or file of the image is encrypted with a randomly generated image key using AES-256. FTK Imager Select Image Destination window Ok, we are almost done. Click Finish and you'll see Create Image window again. Now, look at three options at the bottom of the window. The verification process is very important, so make sure Verify images after they are created option is ticked, it helps you to be sure that the source and the image are equal. Precalculate Progress Statistics option is also very useful: it will show you estimated time of arrival during the imaging process. 
The last option will create directory listings of all files in the image for you, but of course, it takes time, so use it only if you need it.

FTK Imager Create Image window

All you need to do now is click Start. Great, the imaging process has started! When the image is created, the verification process starts. Finally, you'll get the Drive/Image Verify Results window, like the one in the following figure:

FTK Imager Drive/Image Verify Results window

As you can see, in our case the source and the image are identical: both hashes matched. In the folder with the image, you will also find an info file with valuable information such as drive model, serial number, source data size, sector count, MD5 and SHA1 checksums, and so on.

How it works...

FTK Imager uses the physical drive of your choice as the source and creates a bit-by-bit image of it in EnCase's Evidence File format. During the verification process, the MD5 and SHA1 hashes of the image and the source are compared.

See more

FTK Imager download page: http://accessdata.com/product-download/digital-forensics/ftk-imager-version-3.4.3
FTK Imager User Guide: https://ad-pdf.s3.amazonaws.com/Imager/3_4_3/FTKImager_UG.pdf

Drive acquisition in RAW format with DC3DD

DC3DD is a patched (by Jesse Kornblum) version of the classic GNU dd utility with some computer forensics features, for example, on-the-fly hashing with a number of algorithms (MD5, SHA-1, SHA-256, and SHA-512), showing the progress of the acquisition process, and so on.

Getting ready

You can find a compiled standalone 64-bit version of DC3DD for Windows on SourceForge. Just download the ZIP or 7z archive, unpack it, and you are ready to go.

How to do it...

Open the Windows Command Prompt, change directory (you can use the cd command to do it) to the one containing dc3dd.exe, and type the following command:

dc3dd.exe if=\\.\PHYSICALDRIVE2 of=X:\147-2017.dd hash=sha256 log=X:\147-2017.log

Press Enter and the acquisition process will start. Of course, your command will be a bit different, so let's find out what each part of it means:

if: This stands for input file. DD is originally a Linux utility, where everything is a file; as you can see in our command, we put physical drive 2 here (this is the drive we wanted to image, but in your case it can be another drive, depending on the number of drives connected to your workstation).
of: This stands for output file. Here you should type the destination of your image which, as you remember, is in RAW format; in our case it's the X: drive and the 147-2017.dd file.
hash: As already mentioned, DC3DD supports four hashing algorithms: MD5, SHA-1, SHA-256, and SHA-512. We chose SHA-256, but you can choose the one you like.
log: Here you should type the destination for the logs; you will find the image version, image hash, and so on in this file after acquisition is completed.

How it works...

DC3DD creates a bit-by-bit image of the source drive in RAW format, so the size of the image will be the same as that of the source, and calculates the image hash using the algorithm of the examiner's choice, in our case SHA-256.

See also

DC3DD download page: https://sourceforge.net/projects/dc3dd/files/dc3dd/7.2%20-%20Windows/

Mounting forensic images with Arsenal Image Mounter

Arsenal Image Mounter is an open source tool developed by Arsenal Recon. It can help a digital forensic examiner to mount a forensic image or virtual machine disk in Windows.
It supports both E01 (and Ex01) and RAW forensic images, so you can use it with any of the images we created in the previous recipes. It's very important to note that Arsenal Image Mounter mounts the contents of disk images as complete disks. The tool supports all the file systems you can find on Windows drives: NTFS, ReFS, FAT32 and exFAT. It also has temporary write support for images, which is a very useful feature if, for example, you want to boot the system from the image you are examining.

Getting ready

Go to the Arsenal Image Mounter web page on the Arsenal Recon website and click the Download button to download the ZIP archive. At the time of this writing, the latest version of the tool is 2.0.010, so in our case the archive has the name Arsenal_Image_Mounter_v2.0.010.0_x64.zip. Extract it to a location of your choice and you are ready to go; no installation is needed.

How to do it...

There are two ways to choose an image for mounting in Arsenal Image Mounter:

You can use the File menu and choose Mount image.
You can use the Mount image button as shown in the following figure:

Arsenal Image Mounter main window

When you choose the Mount image option from the File menu or click on the Mount image button, an Open window will pop up; here you should choose an image for mounting. The next window you will see is Mount options, like the one in the following figure:

Arsenal Image Mounter Mount options window

As you can see, there are a few options here:

Read only: If you choose this option, the image is mounted in read-only mode, so no write operations are allowed. (Do you still remember that you mustn't alter the evidence in any way? Of course, it's already an image, not the original drive, but nevertheless.)
Fake disk signatures: If an all-zero disk signature is found on the image, Arsenal Image Mounter reports a random disk signature to Windows, so it's mounted properly.
Write temporary: If you choose this option, the image is mounted in read-write mode, but all modifications are written not to the original image file, but to a temporary differential file.
Write original: Again, this option mounts the image in read-write mode, but this time the original image file will be modified.
Sector size: This option allows you to choose the sector size.
Create "removable" disk device: This option emulates the attachment of a USB thumb drive.

Choose the options you think you need and click OK. We decided to mount our image as read only. Now you can see a hard drive icon on the main window of the tool; the image is mounted. If you mounted only one image and want to unmount it, select the image and click on the Remove selected button. If you have a few mounted images and want to unmount all of them, click on the Remove all button.

How it works...

Arsenal Image Mounter mounts forensic images or virtual machine disks as complete disks in read-only or read-write mode. Later, a digital forensics examiner can access their contents even with Windows Explorer.

See also

Arsenal Image Mounter page on the Arsenal Recon website: https://arsenalrecon.com/apps/image-mounter/

Summary

In this article, the authors have explained the process and importance of drive acquisition using imaging tools such as FTK Imager and DC3DD, and how to mount the resulting images with Arsenal Image Mounter. Drive acquisition, being the first step in the analysis of digital evidence, needs to be carried out with utmost care, which in turn will make the analysis process smooth.
Resources for Article: Further resources on this subject: Forensics Recovery [article] Digital and Mobile Forensics [article] Mobile Forensics and Its Challanges [article]
Read more

Node.js 13 releases with an upgraded V8, full ICU support, stable Worker Threads API and more

Fatema Patrawala
23 Oct 2019
4 min read
Yesterday was a super exciting day for Node.js developers, as the Node.js foundation announced that Node.js 12 transitions to Long Term Support (LTS) with the release of Node.js 13. As per the team, Node.js 12 becomes the newest LTS release alongside versions 10 and 8. This release marks the transition of Node.js 12.x into LTS with the codename 'Erbium'. The 12.x release line now moves into "Active LTS" and will remain so until October 2020. Then it will move into "Maintenance" until the end of life in April 2022.

The new Node.js 13 release delivers faster startup and better default heap limits. It includes updates to V8, TLS and llhttp, and new features like a diagnostic report, a bundled heap dump capability, and updates to Worker Threads, N-API, and more.

Key features in Node.js 13

Let us take a look at the key features included in Node.js 13.

V8 gets an upgrade to V8 7.8

This release ships with V8 7.8. This new version of the V8 JavaScript engine brings performance tweaks and improvements to keep Node.js up to date with the ongoing improvements in the language and runtime.

Full ICU enabled by default in Node.js 13

As of Node.js 13, full-icu is enabled by default, which means hundreds of other locales are now supported out of the box. This will simplify the development and deployment of applications for non-English locales.

Stable workers API

The Worker Threads API is now a stable feature in both Node.js 12 and Node.js 13. While Node.js already performs well with its single-threaded event loop, there are some use cases where additional threads can be leveraged for better results.

New compiler and platform support

Node.js and V8 continue to embrace newer C++ features and take advantage of newer compiler optimizations and security enhancements. With the release of Node.js 13, the codebase now requires a minimum of version 10 of the OS X development tools and version 7.2 of the AIX operating system. In addition to this, there has been progress on supporting Python 3 for building Node.js applications. Systems that have both Python 2 and Python 3 installed will still be able to use Python 2; however, systems with only Python 3 should now be able to build using Python 3.

Developers discuss pain points in Node.js 13

On Hacker News, users discussed various pain points in Node.js 13 and some of the functionality missing from this release.

One of the users commented, "To save you the clicks: Node.js 13 doesn't support top-level await. Node includes V8 7.8, released Sep 27. Top-level await merged into V8 on Sep 24, but didn't make it in time for the 7.8 release."

A response to this comment came from the V8 team: "TLA is only in modules. Once node supports modules, it will also have TLA. We're also pushing out a version with 7.9 fairly soonish."

Other users discussed how Node.js performs with TypeScript: "I've been using node with typescript and it's amazing. VERY productive. The key thing is you can do a large refactoring without breaking anything. The biggest challenge I have right now is actually the tooling. Intellij tends to break sometimes. I'm using lerna for a monorepo with sub-modules and it's buggy with regular npm. For example 'npm audit' doesn't work. I might have to migrate to yarn…"

If you want to know more about this release, check out the official Node.js blog post as well as the GitHub page for the release notes.
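To illustrate the now-stable Worker Threads API, here is a minimal, self-contained sketch (it is not taken from the release notes; the file name and sample data are ours). Saved as worker-demo.js and run with node worker-demo.js on Node.js 12 or 13, it offloads a small computation to a worker thread and posts the result back:

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Main thread: spawn this same file as a worker and hand it some data.
  const worker = new Worker(__filename, { workerData: { numbers: [1, 2, 3, 4] } });
  worker.on('message', (sum) => console.log(`Sum computed off the main thread: ${sum}`));
  worker.on('error', (err) => console.error(err));
} else {
  // Worker thread: do the "heavy" work and send the result back.
  const sum = workerData.numbers.reduce((a, b) => a + b, 0);
  parentPort.postMessage(sum);
}

No experimental flag is required any more now that the API is stable.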
The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger 12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft] 5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft] Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more Google is planning to bring Node.js support to Fuchsia
Read more

Extension functions in Kotlin: everything you need to know

Aaron Lazar
08 Jun 2018
8 min read
Kotlin is a rapidly rising programming language. It offers developers the simplicity and effectiveness to develop robust and lightweight applications. Kotlin offers great functional programming support, and one of the best features of Kotlin in this respect are extension functions, hands down! Extension functions are great, because they let you modify existing types with new functions. This is especially useful when you're working with Android and you want to add extra functions to the framework classes. In this article, we'll see what Extension functions are and how the're a blessing in disguise! This article has been extracted from the book, Functional Kotlin, by Mario Arias and Rivu Chakraborty. The book bridges the language gap for Kotlin developers by showing you how to create and consume functional constructs in Kotlin. fun String.sendToConsole() = println(this) fun main(args: Array<String>) { "Hello world! (from an extension function)".sendToConsole() } To add an extension function to an existing type, you must write the function's name next to the type's name, joined by a dot (.). In our example, we add an extension function (sendToConsole()) to the String type. Inside the function's body, this refers the instance of String type (in this extension function, string is the receiver type). Apart from the dot (.) and this, extension functions have the same syntax rules and features as a normal function. Indeed, behind the scenes, an extension function is a normal function whose first parameter is a value of the receiver type. So, our sendToConsole() extension function is equivalent to the next code: fun sendToConsole(string: String) = println(string) sendToConsole("Hello world! (from a normal function)") So, in reality, we aren't modifying a type with new functions. Extension functions are a very elegant way to write utility functions, easy to write, very fun to use, and nice to read—a win-win. This also means that extension functions have one restriction—they can't access private members of this, in contrast with a proper member function that can access everything inside the instance: class Human(private val name: String) fun Human.speak(): String = "${this.name} makes a noise" //Cannot access 'name': it is private in 'Human' Invoking an extension function is the same as a normal function—with an instance of the receiver type (that will be referenced as this inside the extension), invoke the function by name. Extension functions and inheritance There is a big difference between member functions and extension functions when we talk about inheritance. The open class Canine has a subclass, Dog. A standalone function, printSpeak, receives a parameter of type Canine and prints the content of the result of the function speak(): String: open class Canine { open fun speak() = "<generic canine noise>" } class Dog : Canine() { override fun speak() = "woof!!" } fun printSpeak(canine: Canine) { println(canine.speak()) } Open classes with open methods (member functions) can be extended and alter their behavior. Invoking the speak function will act differently depending on which type is your instance. The printSpeak function can be invoked with any instance of a class that is-a Canine, either Canine itself or any subclass: printSpeak(Canine()) printSpeak(Dog()) If we execute this code, we can see this on the console: Although both are Canine, the behavior of speak is different in both cases, as the subclass overrides the parent implementation. 
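The console screenshots are not reproduced here, but the output of the Canine/Dog example can be read straight from the code: printSpeak dispatches to the overridden member function. Wrapping the two calls in a main function (a trivial addition, not part of the original listing) makes this runnable:

fun main(args: Array<String>) {
    printSpeak(Canine()) // prints: <generic canine noise>
    printSpeak(Dog())    // prints: woof!!
}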
But with extension functions, many things are different. As with the previous example, Feline is an open class extended by the Cat class. But speak is now an extension function: open class Feline fun Feline.speak() = "<generic feline noise>" class Cat : Feline() fun Cat.speak() = "meow!!" fun printSpeak(feline: Feline) { println(feline.speak()) } Extension functions don't need to be marked as override, because we aren't overriding anything: printSpeak(Feline()) printSpeak(Cat() If we execute this code, we can see this on the console: In this case, both invocations produce the same result. Although in the beginning, it seems confusing, once you analyze what is happening, it becomes clear. We're invoking the Feline.speak() function twice; this is because each parameter that we pass is a Feline to the printSpeak(Feline) function: open class Primate(val name: String) fun Primate.speak() = "$name: <generic primate noise>" open class GiantApe(name: String) : Primate(name) fun GiantApe.speak() = "${this.name} :<scary 100db roar>" fun printSpeak(primate: Primate) { println(primate.speak()) } printSpeak(Primate("Koko")) printSpeak(GiantApe("Kong")) If we execute this code, we can see this on the console: In this case, it is still the same behavior as with the previous examples, but using the right value for name. Speaking of which, we can reference name with name and this.name; both are valid. Extension functions as members Extension functions can be declared as members of a class. An instance of a class with extension functions declared is called the dispatch receiver. The Caregiver open class internally defines, extension functions for two different classes, Feline and Primate: open class Caregiver(val name: String) { open fun Feline.react() = "PURRR!!!" fun Primate.react() = "*$name plays with ${this@Caregiver.name}*" fun takeCare(feline: Feline) { println("Feline reacts: ${feline.react()}") } fun takeCare(primate: Primate){ println("Primate reacts: ${primate.react()}") } } Both extension functions are meant to be used inside an instance of Caregiver. Indeed, it is a good practice to mark member extension functions as private, if they aren't open. In the case of Primate.react(), we are using the name value from Primate and the name value from Caregiver. To access members with a name conflict, the extension receiver (this) takes precedence and to access members of the dispatcher receiver, the qualified this syntax must be used. Other members of the dispatcher receiver that don't have a name conflict can be used without qualified this. Don't get confused by the various means of this that we have already covered: Inside a class, this means the instance of that class Inside an extension function, this means the instance of the receiver type like the first parameter in our utility function with a nice syntax: class Dispatcher { val dispatcher: Dispatcher = this fun Int.extension(){ val receiver: Int = this val dispatcher: Dispatcher = this@Dispatcher } } Going back to our Zoo example, we instantiate a Caregiver, a Cat, and a Primate, and we invoke the function Caregiver.takeCare with both animal instances: val adam = Caregiver("Adam") val fulgencio = Cat() val koko = Primate("Koko") adam.takeCare(fulgencio) adam.takeCare(koko) If we execute this code, we can see this on the console: Any zoo needs a veterinary surgeon. 
The class Vet extends Caregiver: open class Vet(name: String): Caregiver(name) { override fun Feline.react() = "*runs away from $name*" } We override the Feline.react() function with a different implementation. We are also using the Vet class's name directly, as the Feline class doesn't have a property name: val brenda = Vet("Brenda") listOf(adam, brenda).forEach { caregiver -> println("${caregiver.javaClass.simpleName} ${caregiver.name}") caregiver.takeCare(fulgencio) caregiver.takeCare(koko) } After which, we get the following output: Extension functions with conflicting names What happens when an extension function has the same name as a member function? The Worker class has a function work(): String and a private function rest(): String. We also have two extension functions with the same signature, work and rest: class Worker { fun work() = "*working hard*" private fun rest() = "*resting*" } fun Worker.work() = "*not working so hard*" fun <T> Worker.work(t:T) = "*working on $t*" fun Worker.rest() = "*playing video games*" Having extension functions with the same signature isn't a compilation error, but a warning: Extension is shadowed by a member: public final fun work(): String It is legal to declare a function with the same signature as a member function, but the member function always takes precedence, therefore, the extension function is never invoked. This behavior changes when the member function is private, in this case, the extension function takes precedence. It is also possible to overload an existing member function with an extension function: val worker = Worker() println(worker.work()) println(worker.work("refactoring")) println(worker.rest()) On execution, work() invokes the member function and work(String) and rest() are extension functions: Extension functions for objects In Kotlin, objects are a type, therefore they can have functions, including extension functions (among other things, such as extending interfaces and others). We can add a buildBridge extension function to the object, Builder: object Builder { } fun Builder.buildBridge() = "A shinny new bridge" We can include companion objects. The class Designer has two inner objects, the companion object and Desk object: class Designer { companion object { } object Desk { } } fun Designer.Companion.fastPrototype() = "Prototype" fun Designer.Desk.portofolio() = listOf("Project1", "Project2") Calling this functions works like any normal object member function: Designer.fastPrototype() Designer.Desk.portofolio().forEach(::println) So there you have it! You now know how to take advantage of extension functions in Kotlin. If you found this tutorial helpful and would like to learn more, head on over to purchase the full book, Functional Kotlin, by Mario Arias and Rivu Chakraborty. Forget C and Java. Learn Kotlin: the next universal programming language 5 reasons to choose Kotlin over Java Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform
Read more

Build user directory app with Angular [Tutorial]

Sugandha Lahoti
05 Jul 2018
12 min read
In this article, we will learn how to build a user directory with Angular. The app will have a REST API which will be created during the course of this example. In this simple example, we'll be creating a users app which will be a table with a list of users together with their email addresses and phone numbers. Each user in the table will have an active state whose value is a boolean. We will be able to change the active state of a particular user from false to true and vice versa. The app will give us the ability to add new users and also delete users from the table. diskDB will be used as the database for this example. We will have an Angular service which contains methods that will be responsible for communicating with the REST end points. These methods will be responsible for making get, post, put, and delete requests to the REST API. The first method in the service will be responsible for making a get request to the API. This will enable us to retrieve all the users from the back end. Next, we will have another method that makes a post request to the API. This will enable us to add new users to the array of existing users. The next method we shall have will be responsible for making a delete request to the API in order to enable the deletion of a user. Finally, we shall have a method that makes a put request to the API. This will be the method that gives us the ability to edit/modify the state of a user. In order to make these requests to the REST API, we will have to make use of the HttpModule. The aim of this section is to solidify your knowledge of HTTP. As a JavaScript and, in fact, an Angular developer, you are bound to make interactions with APIs and web servers almost all the time. So much data used by developers today is in form of APIs and in order to make interactions with these APIs, we need to constantly make use of HTTP requests. As a matter of fact, HTTP is the foundation of data communication for the web. This article is an excerpt from the book, TypeScript 2.x for Angular Developers, written by Chris Nwamba. Create a new Angular app To start a new Angular app, run the following command: ng new user This creates the Angular 2 user app. Install the following dependencies: Express Body-parser Cors npm install express body-parser cors --save Create a Node server Create a file called server.js at the root of the project directory. This will be our node server. Populate server.js with the following block of code: // Require dependencies const express = require('express'); const path = require('path'); const http = require('http'); const cors = require('cors'); const bodyParser = require('body-parser'); // Get our API routes const route = require('./route'); const app = express(); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: false })); // Use CORS app.use(cors()); // Set our api routes app.use('/api', route); /** * Get port from environment. */ const port = process.env.PORT || '3000'; /** * Create HTTP server. */ const server = http.createServer(app); //Listen on provided port app.listen(port); console.log('server is listening'); What's going on here is pretty simple: We required and made use of the dependencies We defined and set the API routes We set a port for our server to listen to The API routes are being required from ./route, but this path does not exist yet. Let's quickly create it. At the root of the project directory, create a file called route.js. This is where the API routes will be made. 
We need to have some form of database from which we can fetch, post, delete, and modify data. Just as in the previous example, we will make use of diskdb. The route will pretty much have the same pattern as in the first example.

Install diskDB

Run the following in the project folder to install diskdb:

npm install diskdb

Create a users.json file at the root of the project directory to serve as our database collection, where we keep our users' details. Populate users.json with the following:

[{"name": "Marcel", "email": "test1@gmail.com", "phone_number":"08012345", "isOnline":false}]

Now, update route.js.

route.js

const express = require('express');
const router = express.Router();
const db = require('diskdb');
db.connect(__dirname, ['users']);

//save
router.post('/users', function(req, res, next) {
  var user = req.body;
  if (!user.name && !(user.email + '') && !(user.phone_number + '') && !(user.isActive + '')) {
    res.status(400);
    res.json({ error: 'error' });
  } else {
    db.users.save(user);
    res.json(user);
  }
});

//get
router.get('/users', function(req, res, next) {
  var foundUsers = db.users.find();
  console.log(foundUsers);
  res.json(foundUsers);
});

//updateUsers
router.put('/user/:id', function(req, res, next) {
  var updUser = req.body;
  console.log(updUser, req.params.id);
  db.users.update({ _id: req.params.id }, updUser);
  res.json({ msg: req.params.id + ' updated' });
});

//delete
router.delete('/user/:id', function(req, res, next) {
  console.log(req.params);
  db.users.remove({ _id: req.params.id });
  res.json({ msg: req.params.id + ' deleted' });
});

module.exports = router;

We've created a REST API with the API routes, using diskDB as the database. Start the server using the following command:

node server.js

The server is running and listening on the assigned port. Now, open up the browser and go to http://localhost:3000/api/users. Here, we can see the data that we added to the users.json file. This shows that our routes are working and we are getting data from the database.

Create a new component

Run the following command to create a new component:

ng g component user

This creates the user.component.ts, user.component.html, user.component.css and user.component.spec.ts files. user.component.spec.ts is used for testing, therefore we will not be making use of it in this chapter. The newly created component is automatically imported into app.module.ts. We have to tell the root component about the user component. We'll do this by adding the selector from user.component.ts to the root component's template (app.component.html):

<div style="text-align:center">
  <app-user></app-user>
</div>

Create a service

The next step is to create a service that interacts with the API that we created earlier:

ng generate service user

This creates a user service called user.service.ts. Next, import the UserService class into app.module.ts and include it in the providers array. Then, in user.service.ts, import rxjs/add/operator/map along with the Angular HTTP classes:

import { Injectable } from '@angular/core';
import { Http, Headers } from '@angular/http';
import 'rxjs/add/operator/map';

Within the UserService class, define a constructor and pass in the Angular 2 HTTP service.
import { Injectable } from '@angular/core'; import { Http, Headers } from '@angular/http'; import 'rxjs/add/operator/map'; @Injectable() export class UserService { constructor(private http: Http) {} } Within the service class, write a method that makes a get request to fetch all users and their details from the API: getUser() { return this.http .get('http://localhost:3000/api/users') .map(res => res.json()); } Write the method that makes a post request and creates a new todo: addUser(newUser) { var headers = new Headers(); headers.append('Content-Type', 'application/json'); return this.http .post('http://localhost:3000/api/user', JSON.stringify(newUser), { headers: headers }) .map(res => res.json()); } Write another method that makes a delete request. This will enable us to delete a user from the collection of users: deleteUser(id) { return this.http .delete('http://localhost:3000/api/user/' + id) .map(res => res.json()); } Finally, write a method that makes a put request. This method will enable us to modify the state of a user: updateUser(user) { var headers = new Headers(); headers.append('Content-Type', 'application/json'); return this.http .put('http://localhost:3000/api/user/' + user._id, JSON.stringify(user), { headers: headers }) .map(res => res.json()); } Update app.module.ts to import HttpModule and FormsModule and include them to the imports array: import { HttpModule } from '@angular/http'; import { FormsModule } from '@angular/forms'; ..... imports: [ ..... HttpModule, FormsModule ] The next thing to do is to teach the user component to use the service: Import UserService in user.component.ts. import {UserService} from '../user.service'; Next, include the service class in the user component constructor. constructor(private userService: UserService) { }. Just below the exported UserComponent class, add the following properties and define their data types: users: any = []; user: any; name: any; email: any; phone_number: any; isOnline: boolean; Now, we can make use of the methods from the user service in the user component. Updating user.component.ts Within the ngOnInit method, make use of the user service to get all users from the API: ngOnInit() { this.userService.getUser().subscribe(users => { console.log(users); this.users = users; }); } Below the ngOnInit method, write a method that makes use of the post method in the user service to add new users: addUser(event) { event.preventDefault(); var newUser = { name: this.name, email: this.email, phone_number: this.phone_number, isOnline: false }; this.userService.addUser(newUser).subscribe(user => { this.users.push(user); this.name = ''; this.email = ''; this.phone_number = ''; }); } Let's make use of the delete method from the user service to enable us to delete users: deleteUser(id) { var users = this.users; this.userService.deleteUser(id).subscribe(data => { console.log(id); const index = this.users.findIndex(user => user._id == id); users.splice(index, 1) }); } Finally, we'll make use of user service to make put requests to the API: updateUser(user) { var _user = { _id: user._id, name: user.name, email: user.email, phone_number: user.phone_number, isActive: !user.isActive }; this.userService.updateUser(_user).subscribe(data => { const index = this.users.findIndex(user => user._id == _user._id) this.users[index] = _user; }); } We have all our communication with the API, service, and component. We have to update user.component.html in order to illustrate all that we have done in the browser. 
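Before moving on to the template, a quick note on the service itself. The Http class from @angular/http used above was deprecated in later Angular releases in favour of HttpClient from @angular/common/http, which parses JSON responses for you and removes the need for the map(res => res.json()) calls. If you happen to be on Angular 4.3 or newer, the same service could be sketched roughly as follows; this is an illustrative alternative rather than the book's code, and it assumes HttpClientModule has been added to the imports array in app.module.ts:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable()
export class UserService {
  // Base URL of the Express API built earlier
  private apiUrl = 'http://localhost:3000/api';

  constructor(private http: HttpClient) {}

  // GET all users
  getUser() {
    return this.http.get(this.apiUrl + '/users');
  }

  // POST a new user; the path matches router.post('/users', ...) in route.js
  addUser(newUser) {
    return this.http.post(this.apiUrl + '/users', newUser);
  }

  // DELETE a user by id
  deleteUser(id) {
    return this.http.delete(this.apiUrl + '/user/' + id);
  }

  // PUT (update) an existing user
  updateUser(user) {
    return this.http.put(this.apiUrl + '/user/' + user._id, user);
  }
}

The component code stays the same either way, since subscribe() works on the observables returned by both APIs.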
We'll be making use of bootstrap for styling. So, we have to import the bootstrap CDN in index.html: <!doctype html> <html lang="en"> <head> //bootstrap CDN <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css" integrity="sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M" crossorigin="anonymous"> <meta charset="utf-8"> <title>User</title> <base href="/"> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="icon" type="image/x-icon" href="favicon.ico"> </head> <body> <app-root></app-root> </body> </html> Updating user.component.html Here is the component template for the user component: <form class="form-inline" (submit) = "addUser($event)"> <div class="form-row"> <div class="col"> <input type="text" class="form-control" [(ngModel)] ="name" name="name"> </div> <div class="col"> <input type="text" class="form-control" [(ngModel)] ="email" name="email"> </div> <div class="col"> <input type="text" class="form-control" [(ngModel)] ="phone_number" name="phone_number"> </div> </div> <br> <button class="btn btn-primary" type="submit" (click) = "addUser($event)"><h4>Add User</h4></button> </form> <table class="table table-striped" > <thead> <tr> <th>Name</th> <th>Email</th> <th>Phone_Number</th> <th>Active</th> </tr> </thead> <tbody *ngFor="let user of users"> <tr> <td>{{user.name}}</td> <td>{{user.email}}</td> <td>{{user.phone_number}}</td> <td>{{user.isActive}}</td> <td><input type="submit" class="btn btn-warning" value="Update Status" (click)="updateUser(user)" [ngStyle]="{ 'text-decoration-color:': user.isActive ? 'blue' : ''}"></td> <td><button (click) ="deleteUser(user._id)" class="btn btn-danger">Delete</button></td> </tr> </tbody> </table> A lot is going on in the preceding code, let's drill down into the code block: We have a form which takes in three inputs and a submit button which triggers the addUser() method when clicked There is a delete button which triggers the delete method when it is clicked There is also an update status input element that triggers the updateUser() method when clicked We created a table in which our users' details will be displayed utilizing Angular's *ngFor directive and Angular's interpolation binding syntax, {{}} Some extra styling will be added to the project. Go to user.component.css and add the following: form{ margin-top: 20px; margin-left: 20%; size: 50px; } table{ margin-top:20px; height: 50%; width: 50%; margin-left: 20%; } button{ margin-left: 20px; } Running the app Open up two command line interfaces/terminals. In both of them, navigate to the project directory. Run node server.js to start the server in one. Run ng serve in the other to serve the Angular 2 app. Open up the browser and go to localhost:4200. In this simple users app, we can perform all CRUD operations. We can create new users, get users, delete users, and update the state of users. By default, a newly added user's active state is false. That can be changed by clicking on the change state button. We created an Angular app from scratch for building a user directory. To know more, on how to write unit tests and perform debugging in Angular, check our book TypeScript 2.x for Angular Developers. Everything new in Angular 6: Angular Elements, CLI commands and more Why switch to Angular for web development – Interview with Minko Gechev Building Components Using Angular

We must change how we think about AI, urge AI founding fathers

Neil Aitken
31 May 2018
9 min read
In Manhattan, nearly 15,000 Taxis make around 30 journeys each, per day. That’s nearly half a million paid trips. The yellow cabs are part of the never ending, slow progression of vehicles which churn through the streets of New York. The good news is, after a century of worsening traffic, congestion is about to be ameliorated, at least to a degree. Researchers at MIT announced this week, that they have developed an algorithm to optimise the way taxis find their customers. Their product is allegedly so efficient, it can reduce the required number of cabs (for now, the ones with human drivers) in Manhattan, by a third. That’s a non trivial improvement. The trick, apparently, is to use the cabs as a hustler might cue the ball in Pool – lining the next pick up to start where the last drop off ended. The technology behind the improvement offered by the MIT research team, is the same one that is behind most of the incredible technology news stories of the last 3 years – Artificial Intelligence. AI is now a part of most of the digital interactions we have. It fuels the recommendation engines in YouTube, Spotify and Netflix. It shows you products you might like in Google’s search results and on Amazon’s homepage. Undoubtedly, AI is the hot topic of the time – as you cannot possibly have failed to notice. How AI was created – and nearly died AI was, until recently, a long forgotten scientific curiosity, employed seriously only in Sci-Fi movies. The technology fell in to a ‘Winter’– a time when AI related projects couldn’t get funding and decision makers had given up on the technology - in the late 1980s. It was at that time that much of the fundamental work which underpins today’s AI, concepts like neural networks and backpropagation were codified. Artificial Intelligence is now enjoying a rebirth. Almost every new idea funded by Venture Capitalists has AI baked in. The potential excites business owners, especially those involved in the technology sphere, and scares governments in equal measure. It offers better profits and the potential for mass unemployment as if they are two sides of the same coin. Is is a one in a generation technology improvement, similar to Air Conditioning, mass produced motor car and the smartphone, in that it can be applied to all aspects of the economy at the same time. Just as the iPhone has propelled telecommunications technology forward, and created billions of dollars of sales for phone companies selling mobile data plans, AI is fueling totally new businesses and making existing operations significantly more efficient. Behind the fanfare associated with AI, however, lies a simple truth. Today’s AI algorithms use what’s called ‘narrow’ or ‘domain specific’ intelligence. In simple terms, each current AI implementation is specific to the job it is given. IBM trained their AI system ‘Watson’, to beat human contestants at ‘Jeopardy!’ When Google want to build an ‘AI product’ that can be used to beat a living counterpart at the Chinese board game ‘Go’, they create a new AI system. And so on. A new task requires a new AI system. Judea Pearl, inventor of Bayesian networks and Turing Awardee On AI systems that can move from predicting what will happen to what will cause something Now, one of the people behind those original concepts from the 1980s, which underpin today’s AI solutions is back with an even bigger idea which might push AI forward. 
Judea Pearl, Chancellor's professor of computer science and statistics at UCLA, and a distinguished visiting professor at the Technion, Israel Institute of Technology was awarded the Turing Award 30 years ago. This award was given to him for the Bayesian mathematical models, which gave modern AI its strength. Pearl’s fundamental contribution to computer science was in providing the logic and decision making framework for computers to operate under uncertainty. Some say it was he who provided the spark which thawed that AI winter. Today, he laments the current state of AI, concerned that the field has evolved very little in the last 3 decades since his important theory was presented. Pearl likens current AI implementations to simple tools which can tell you what’s likely to come next, based on the recognition of a familiar pattern. For example, a medical AI algorithm might be able to look at X-Rays of a human chest and ‘discern’ that the patient has, or does not have, lung cancer based on patterns it has learnt from its training datasets. The AI in this scenario doesn’t ‘know’ what lung cancer is or what a tumor is. Importantly, it is a very long way from understanding that smoking can cause the affliction. What’s needed in AI next, says Pearl, is a critical difference: AIs which are evolved to the point where they can determine not just what will happen next, but what will cause it. It’s a fundamental improvement, of the same magnitude as his earlier contributions. Causality – what Pearl is proposing - is one of the most basic units of scientific thought and progress. The ability to conduct a repeatable experiment, showing that A caused B, in multiple locations and have independent peers review the results is one of the fundamentals of establishing truth. In his most recent publication, ‘The Book Of Why’,  Pearl outlines how we can get AI, from where it is now, to where it can develop an understanding of these causal relationships. He believes the first step is to cement the building blocks of reality – ‘what is a lung’, ‘what is smoke’ and that we’ll be able to do in the next 10 years. Geoff Hinton, Inventor of backprop and capsule nets On AI which more closely mimics the human brain Geoff Hinton’s was the mind behind backpropagation, another of the fundamental technologies which has brought AI to the point it is at today. To progress AI, however, he says we might have to start all over again. Hinton has developed (and produced two papers for the University of Toronto to articulate) a new way of training AI systems, involving something he calls ‘Capsule Networks’ – a concept he’s been working on for 30 years, in an effort to improve the capabilities of the backpropagation algorithms he developed. Capsule networks operate in a manner similar to the human brain. When we see an image, our brains breaks it down to it’s components and processes them in parallel. Some brain neurons recognise edges through contrast differences. Others look for corners by examining the points at which edges intersect. Capsule Networks are similar, several acting on a picture at one time, identifying, for example, an ear or a nose on an animal, irrespective of the angle from which it is being viewed. This is a big deal as until now, CNNs (convolution neural networks), the set of AI algorithms that are most often used in image and video recognition systems, could recognize images as well as humans do. CNNs, however, find it hard to recognize images if their angle is changed. 
It's too early to judge whether capsule networks are the key to the next step in the AI revolution, but in many tasks, capsule networks are already identifying images faster and more accurately than current approaches allow.

Andrew Ng, Chief Scientist at Baidu
On AI that can learn without humans

Andrew Ng is the co-inventor of Google Brain, the team and project that Alphabet put together in 2011 to explore Artificial Intelligence. He now works for Baidu, China's most successful search engine – analogous in size and scope to Google in the rest of the world. At the moment, he heads up Baidu's Silicon Valley AI research facility. Beyond concerns over potential job displacement caused by AI, an issue so significant he says it is perhaps all we should be thinking about when it comes to Artificial Intelligence, he suggests that, in the future, the most progress will be made when AI systems can teach themselves without human involvement. At the moment, training an AI, even on something that, to us, is simple, such as what a cat looks like, is a complicated process. The procedure involves 'supervised learning': the system is shown a lot of pictures (when they did this at Google, they used 10 million images), some of which are cats, labelled appropriately by humans. Once a sufficient level of 'education' has been undertaken, the AI can then accurately label cats, most of the time. Ng thinks supervision is problematic; he describes it as having an Achilles heel in the form of the quantity of data that is required. To go beyond current capabilities, says Ng, will require a completely new type of technology – one which can learn through 'unsupervised learning', that is, machines learning from data that has not been classified by humans. Progress on unsupervised learning is slow. At both Baidu and Google, engineers are focusing on constrained versions of unsupervised learning, such as training AI systems to learn about a human face and then using them to create a face themselves. The activity requires that the AI develops what we would call an 'internal representation' of a face – something which is required in any unsupervised learning. Other avenues to train without supervision include, ingeniously, pitting an AI system against a computer game – an environment in which it receives feedback (through points awarded in the game) for 'constructive' activities, but within which it is not taught directly by a human.

Next generation AI depends on 'scrubbing away' existing assumptions

Artificial Intelligence, as it stands, will deliver economy-wide efficiency improvements, the likes of which we have not seen in decades. It seems incredible to think that the field is still in its infancy when it can deliver such substantial benefits – like reduced traffic congestion, lower carbon emissions, and time saved in New York taxis. But it is. Isaac Asimov, who developed his own concepts of how Artificial Intelligence might be governed by simple rules, said, "Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won't come in." The author should rest assured. Between them, Pearl, Hinton, and Ng are each taking revolutionary approaches to elevate AI beyond even the incredible heights it has reached, and starting without reference to the concepts which have brought us this far.
5 polarizing Quotes from Professor Stephen Hawking on artificial intelligence Toward Safe AI – Maximizing your control over Artificial Intelligence Decoding the Human Brain for Artificial Intelligence to make smarter decisions

How to integrate Firebase on Android/iOS applications natively

Savia Lobo
28 May 2018
26 min read
In this tutorial, you'll see Firebase integration within a native context, basically over an iOS and Android application. You will also implement some of the basic, as well as advanced features, that are found in any modern mobile application in both, Android and iOS ecosystems. So let's get busy! This article is an excerpt taken from the book,' Firebase Cookbook', written by Houssem Yahiaoui. Implement the pushing and retrieving of data from Firebase Real-time Database We're going to start first with Android and see how we can manage this feature: First, head to your Android Studio project. Now that you have opened your project, let's move on to integrating the Real-time Database. In your project, head to the Menu bar, navigate to Tools | Firebase, and then select Realtime Database. Now click Save and retrieve data. Since we've already connected our Android application to Firebase, let's now add the Firebase Real-time Database dependencies locally by clicking on the Add the Realtime Database to your app button. This will give you a screen that looks like the following screenshot:  Figure 1: Android Studio  Firebase integration section Click on the Accept Changes button and the gradle will add these new dependencies to your gradle file and download and build the project. Now we've created this simple wish list application. It might not be the most visually pleasing but will serve us well in this experiment with TextEdit, a Button, and a ListView. So, in our experiment we want to do the following: Add a new wish to our wish list Firebase Database See the wishes underneath our ListView Let's start with adding that list of data to our Firebase. Now head to your MainActivity.java file of any other activity related to your project and add the following code: //[*] UI reference. EditText wishListText; Button addToWishList; ListView wishListview; // [*] Getting a reference to the Database Root. DatabaseReference fRootRef = FirebaseDatabase.getInstance().getReference(); //[*] Getting a reference to the wishes list. DatabaseReference wishesRef = fRootRef.child("wishes"); protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); //[*] UI elements wishListText = (EditText) findViewById(R.id.wishListText); addToWishList = (Button) findViewById(R.id.addWishBtn); wishListview = (ListView) findViewById(R.id.wishsList); } @Override protected void onStart() { super.onStart(); //[*] Listening on Button click event addToWishList.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { //[*] Getting the text from our EditText UI Element. String wish = wishListText.getText().toString(); //[*] Pushing the Data to our Database. 
wishesRef.push().setValue(wish); AlertDialog alertDialog = new AlertDialog.Builder(MainActivity.this).create(); alertDialog.setTitle("Success"); alertDialog.setMessage("wish was added to Firebase"); alertDialog.show(); } }); } In the preceding code, we're doing the following: Getting a reference to our UI elements Since everything in Firebase starts with a reference, we're grabbing ourselves a reference to the root element in our database We are getting another reference to the wishes child method from the root reference Over the OnCreate() method, we are binding all the UI-based references to the actual UI widgets Over the OnStart() method, we're doing the following: Listening to the button click event and grabbing the EditText content Using the wishesRef.push().setValue() method to push the content of the EditText automatically to Firebase, then we are displaying a simple AlertDialog as the UI preferences However, the preceding code is not going to work. This is strange since everything is well configured, but the problem here is that the Firebase Database is secured out of the box with authorization rules. So, head to Database | RULES and change the rules there, and then publish. After that is done, the result will look similar to the following screenshot:  Figure 2: Firebase Real-time Database authorization section After saving and launching the application, the pushed data result will look like this: Figure 3: Firebase Real-time Database after adding a new wish to the wishes collection Firebase creates the child element in case you didn't create it yourself. This is great because we can create and implement any data structure we want, however, we want. Next, let's see how we can retrieve the data we sent. Move back to your onStart() method and add the following code lines: wishesRef.addChildEventListener(new ChildEventListener() { @Override public void onChildAdded(DataSnapshot dataSnapshot, String s) { //[*] Grabbing the data Snapshot String newWish = dataSnapshot.getValue(String.class); wishes.add(newWish); adapter.notifyDataSetChanged(); } @Override public void onChildChanged(DataSnapshot dataSnapshot, String s) {} @Override public void onChildRemoved(DataSnapshot dataSnapshot) {} @Override public void onChildMoved(DataSnapshot dataSnapshot, String s) {} @Override public void onCancelled(DatabaseError databaseError) {} }); Before you implement the preceding code, go to the onCreate() method and add the following line underneath the UI widget reference: //[*] Adding an adapter. adapter = new ArrayAdapter<String>(this, R.layout.support_simple_spinner_dropdown_item, wishes); //[*] Wiring the Adapter wishListview.setAdapter(adapter);  Preceding that, in the variable declaration, simply add the following declaration: ArrayList<String> wishes = new ArrayList<String>(); ArrayAdapter<String> adapter; So, in the preceding code, we're doing the following: Adding a new ArrayList and an adapter for ListView changes. We're wiring everything in the onCreate() method. Wiring an addChildEventListener() in the wishes Firebase reference. Grabbing the data snapshot from the Firebase Real-time Database that is going to be fired whenever we add a new wish, and then wiring the list adapter to notify the wishListview which is going to update our Listview content automatically. Congratulations! You've just wired and exploited the Real-time Database functionality and created your very own wishes tracker. 
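For reference, the relaxed development rules referred to above (under Database | RULES) usually look something like the following sketch; it opens the database to unauthenticated reads and writes, so treat it strictly as a development-only setting:

{
  "rules": {
    ".read": true,
    ".write": true
  }
}

Once authentication is added later in this tutorial, you can tighten these again, for example by replacing true with "auth != null".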
Now, let's see how we can create our very own iOS wishes tracker application using nothing but Swift and Firebase: Head directly to and fire up Xcode, and let's open up the project, where we integrated Firebase. Let's work on the feature. Edit your Podfile and add the following line: pod 'Firebase/Database' This will download and install the Firebase Database dependencies locally, in your very own awesome wishes tracker application. There are two view controllers, one for the wishes table and the other one for adding a new wish to the wishes list, the following represents the main wishes list view.   Figure 4: iOS application wishes list view Once we click on the + sign button in the Header, we'll be navigated with a segueway to a new ViewModal, where we have a text field where we can add our new wish and a button to push it to our list.         Figure 5: Wishes iOS application, in new wish ViewModel Over  addNewWishesViewController.swift, which is the view controller for adding the new wish view, after adding the necessary UITextField, @IBOutlet and the button @IBAction, replace the autogenerated content with the following code lines: import UIKit import FirebaseDatabase class newWishViewController: UIViewController { @IBOutlet weak var wishText: UITextField //[*] Adding the Firebase Database Reference var ref: FIRDatabaseReference? override func viewDidLoad() { super.viewDidLoad() ref = FIRDatabase.database().reference() } @IBAction func addNewWish(_ sender: Any) { let newWish = wishText.text // [*] Getting the UITextField content. self.ref?.child("wishes").childByAutoId().setValue( newWish!) presentedViewController?.dismiss(animated: true, completion:nil) } } In the preceding code, besides the self-explanatory UI element code, we're doing the following: We're using the FIRDatabaseReference and creating a new Firebase reference, and we're initializing it with viewDidLoad(). Within the addNewWish IBAction (function), we're getting the text from the UITextField, calling for the "wishes" child, then we're calling childByAutoId(), which will create an automatic id for our data (consider it a push function, if you're coming from JavaScript). We're simply setting the value to whatever we're going to get from the TextField. Finally, we're dismissing the current ViewController and going back to the TableViewController which holds all our wishes. Implementing anonymous authentication Authentication is one of the most tricky, time-consuming and tedious tasks in any web application. and of course, maintaining the best practices while doing so is truly a hard job to maintain. For mobiles, it's even more complex, because if you're using any traditional application it will mean that you're going to create a REST endpoint, an endpoint that will take an email and password and return either a session or a token, or directly a user's profile information. In Firebase, things are a bit different and in this recipe, we're going to see how we can use anonymous authentication—we will explain that in a second. You might wonder, but why? The why is quite simple: to give users an anonymous temporal, to protect data and to give users an extra taste of your application's inner soul. So let's see how we can make that happen. How to do it... We will first see how we can implement anonymous authentication in Android: Fire up your Android Studio. 
Before doing anything, we need to get some dependencies first, speaking, of course, of the Firebase Auth library that can be downloaded by adding this line to the build.gradle file under the dependencies section: compile 'com.google.firebase:firebase-auth:11.0.2' Now simply Sync and you will be good to start adding Firebase Authentication logic. Let us see what we're going to get as a final result: Figure 6: Android application: anonymous login application A simple UI with a button and a TextView, where we put our user data after a successful authentication process. Here's the code for that simple UI: <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/ apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context="com.hcodex.anonlogin.MainActivity"> <Button android:id="@+id/anonLoginBtn" android:layout_width="289dp" android:layout_height="50dp" android:text="Anounymous Login" android:layout_marginRight="8dp" app:layout_constraintRight_toRightOf="parent" android:layout_marginLeft="8dp" app:layout_constraintLeft_toLeftOf="parent" android:layout_marginTop="47dp" android:onClick="anonLoginBtn" app:layout_constraintTop_toBottomOf= "@+id/textView2" app:layout_constraintHorizontal_bias="0.506" android:layout_marginStart="8dp" android:layout_marginEnd="8dp" /> <TextView android:id="@+id/textView2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Firebase Anonymous Login" android:layout_marginLeft="8dp" app:layout_constraintLeft_toLeftOf="parent" android:layout_marginRight="8dp" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="parent" android:layout_marginTop="80dp" /> <TextView android:id="@+id/textView3" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Profile Data" android:layout_marginTop="64dp" app:layout_constraintTop_toBottomOf= "@+id/anonLoginBtn" android:layout_marginLeft="156dp" app:layout_constraintLeft_toLeftOf="parent" /> <TextView android:id="@+id/profileData" android:layout_width="349dp" android:layout_height="175dp" android:layout_marginBottom="28dp" android:layout_marginEnd="8dp" android:layout_marginLeft="8dp" android:layout_marginRight="8dp" android:layout_marginStart="8dp" android:layout_marginTop="8dp" android:text="" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintHorizontal_bias="0.526" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toBottomOf= "@+id/textView3" /> </android.support.constraint.ConstraintLayout> Now, let's see how we can wire up our Java code: //[*] Step 1 : Defining Logic variables. FirebaseAuth anonAuth; FirebaseAuth.AuthStateListener authStateListener; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); anonAuth = FirebaseAuth.getInstance(); setContentView(R.layout.activity_main); }; //[*] Step 2: Listening on the Login button click event. 
public void anonLoginBtn(View view) { anonAuth.signInAnonymously() .addOnCompleteListener( this, new OnCompleteListener<AuthResult>() { @Override public void onComplete(@NonNull Task<AuthResult> task) { if(!task.isSuccessful()) { updateUI(null); } else { FirebaseUser fUser = anonAuth.getCurrentUser(); Log.d("FIRE", fUser.getUid()); updateUI(fUser); } }); } } //[*] Step 3 : Getting UI Reference private void updateUI(FirebaseUser user) { profileData = (TextView) findViewById( R.id.profileData); profileData.append("Anonymous Profile Id : n" + user.getUid()); } Now, let's see how we can implement anonymous authentication on iOS: What we'll achieve in this test is the following : Figure 7: iOS application, anonymous login application   Before doing anything, we need to download and install the Firebase authentication dependency first. Head directly over to your Podfile and the following line: pod 'Firebase/Auth' Then simply save the file, and on your terminal, type the following command: ~> pod install This will download the needed dependency and configure our application accordingly. Now create a simple UI with a button and after configuring your UI button IBAction reference, let's add the following code: @IBAction func connectAnon(_ sender: Any) { Auth.auth().signInAnonymously() { (user, error) in if let anon = user?.isAnonymous { print("i'm connected anonymously here's my id (user?.uid)") } } } How it works... Let's digest the preceding code: We're defining some basic logic variables; we're taking basically a TextView, where we'll append our results and define the Firebase  anonAuth variable. It's of FirebaseAuth type, which is the starting point for any authentication strategy that we might use. Over onCreate, we're initializing our Firebase reference and fixing our content view. We're going to authenticate our user by clicking a button bound with the anonLoginBtn() method. Within it, we're simply calling for the signInAnonymously() method, then if incomplete, we're testing if the authentication task is successful or not, else we're updating our TextEdit with the user information. We're using the updateUI method to simply update our TextField. Pretty simple steps. Now simply build and run your project and test your shiny new features. Implementing password authentication on iOS Email and password authentication is the most common way to authenticate anyone and it can be a major risk point if done wrong. Using Firebase will remove that risk and make you think of nothing but the UX that you will eventually provide to your users. In this recipe, we're going to see how you can do this on iOS. How to do it... Let's suppose you've created your awesome UI with all text fields and buttons and wired up the email and password IBOutlets and the IBAction login button. Let's see the code behind the awesome, quite simple password authentication process: import UIKit import Firebase import FirebaseAuth class EmailLoginViewController: UIViewController { @IBOutlet weak var emailField: UITextField! @IBOutlet weak var passwordField: UITextField! 
override func viewDidLoad() { super.viewDidLoad() } @IBAction func loginEmail(_ sender: Any) { if self.emailField.text</span> == "" || self.passwordField.text == "" { //[*] Prompt an Error let alertController = UIAlertController(title: "Error", message: "Please enter an email and password.", preferredStyle: .alert) let defaultAction = UIAlertAction(title: "OK", style: .cancel, handler: nil) alertController.addAction(defaultAction) self.present(alertController, animated: true, completion: nil) } else { FIRAuth.auth()?.signIn(withEmail: self.emailField.text!, password: self.passwordField.text!) { (user, error) in if error == nil { //[*] TODO: Navigate to Application Home Page. } else { //[*] Alert in case we've an error. let alertController = UIAlertController(title: "Error", message: error?.localizedDescription, preferredStyle: .alert) let defaultAction = UIAlertAction(title: "OK", style: .cancel, handler: nil) alertController.addAction(defaultAction) self.present(alertController, animated: true, completion: nil) } } } } } How it works ... Let's digest the preceding code: We're simply adding some IBOutlets and adding the IBAction login button. Over the loginEmail function, we're doing two things: If the user didn't provide any email/password, we're going to prompt them with an error alert indicating the necessity of having both fields. We're simply calling for the FIRAuth.auth().singIn() function, which in this case takes an Email and a Password. Then we're testing if we have any errors. Then, and only then, we might navigate to the app home screen or do anything else we want. We prompt them again with the Authentication Error message. And as simple as that, we're done. The User object will be transported, as well, so you may do any additional processing to the name, email, and much more. Implementing password authentication on Android To make things easier in terms of Android, we're going to use the awesome Firebase Auth UI. Using the Firebase Auth UI will save a lot of hassle when it comes to building the actual user interface and handling the different intent calls between the application activities. Let's see how we can integrate and use it for our needs. Let's start first by configuring our project and downloading all the necessary dependencies. Head to your build.gradle file and copy/paste the following entry: compile 'com.firebaseui:firebase-ui-auth:3.0.0' Now, simply sync and you will be good to start. How to do it... Now, let's see how we can make the functionality work: Declare the FirebaseAuth reference, plus add another variable that we will need later on: FirebaseAuth auth; private static final int RC_SIGN_IN = 17; Now, inside your onCreate method, add the following code: auth = FirebaseAuth.getInstance(); if(auth.getCurrentUser() != null) { Log.d("Auth", "Logged in successfully"); } else { startActivityForResult( AuthUI.getInstance() .createSignInIntentBuilder() .setAvailableProviders( Arrays.asList(new AuthUI.IdpConfig.Builder( AuthUI.EMAIL_PROVIDER).build())).build(), RC_SIGN_IN);findViewById(R.id.logoutBtn) .setOnClickListener(this); Now, in your activity, implement the View.OnClick listener. 
So your class will look like the following: public class MainActivity extends AppCompatActivity implements View.OnClickListener {} After that, implement the onClick function as shown here: @Override public void onClick(View v) { if(v.getId() == R.id.logoutBtn) { AuthUI.getInstance().signOut(this) .addOnCompleteListener( new OnCompleteListener<Void>() { @Override public void onComplete(@NonNull Task<Void> task) { Log.d("Auth", "Logged out successfully"); // TODO: make custom operation. } }); } } In the end, implement the onActivityResult method as shown in the following code block: @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if(requestCode == RC_SIGN_IN) { if(resultCode == RESULT_OK) { //User is in ! Log.d("Auth",auth.getCurrentUser().getEmail()); } else { //User is not authenticated Log.d("Auth", "Not Authenticated"); } } } Now build and run your project. You will have a similar interface to that shown in the following screenshot: Figure 8: Android authentication using email/password:  email picker This interface will be shown in case you're not authenticated and your application will list all the saved accounts on your device. If you click on the NONE OF THE ABOVE button, you will be prompted with the following interface: Figure 9: Android authentication email/password: adding new email When you add your email and click on the NEXT button, the API will go and look for that user with that email in your application's users. If such an email is present, you will be authenticated, but if it's not the case, you will be redirected to the Sign-up activity as shown in the following screenshot: Figure 10: Android authentication: creating a new account, with email/password/name Next, you will add your name and password. And with that, you will create a new account and you will be authenticated. How it works... From the preceding code, it's clear that we didn't create any user interface. The Firebase UI is so powerful, so let's explore what happens: The setAvailableProviders method will take a list of providers—those providers will be different based on your needs, so it can be any email provider, Google, Facebook, and each and every provider that Firebase supports. The main difference is that each and every provider will have each separate configuration and necessary dependencies that you will need to support the functionality. Also, if you've noticed, we're setting up a logout button. We created this button mainly to log out our users and added a click listener to it. The idea here is that when you click on it, the application performs the Sign-out operation. Then you add your custom intent that will vary from a redirect to closing the application. If you noticed, we're implementing the onActivityResult special function and this one will be our main listening point whenever we connect or disconnect from the application. Within it, we can perform different operations from resurrection to displaying toasts, to anything that you can think of. Implementing Google Sign-in authentication Google authentication is the process of logging in/creating an account using nothing but your existing Google account. It's easy, fast, and intuitive and removes a lot of hustle we face, usually when we register any web/mobile application. I'm talking basically about form filling. 
Using Firebase Google Sign-in authentication, we can manage such functionality; plus we have had the user basic metadata such as the display name, picture URL, and more. In this recipe, we're going to see how we can implement Google Sign-in functionality for both Android and iOS. Before doing any coding, it's important to do some basic configuration in our Firebase Project console. Head directly to your Firebase project Console | Authentication | SIGN-IN METHOD | Google and simply activate the switch and follow the instructions there in order to get the client. Please notice that Google Sign-in is automatically configured for iOS, but for Android, we will need to do some custom configuration. Let us first look at getting ready for Android to implement Google Sign-in authentication: Before we start implementing the authentication functionality, we will need to install some dependencies first, so please head to your build.gradle file and paste the following, and then sync your build: compile 'com.google.firebase:firebase-auth:11.4.2' compile 'com.google.android.gms:play-services- auth:11.4.2' The dependency versions are dependable, and that means that whenever you want to install them, you will have to provide the same version for both dependencies. Moving on to getting ready  in iOS for implementation of Google Sign-in authentication: In iOS, we will need to install a couple of dependencies, so please go and edit your Podfile and add the following lines underneath your already present dependencies, if you have any: pod 'Firebase/Auth' pod 'GoogleSignIn' Now, in your terminal, type the following command: ~> pod install This command will install the required dependencies and configure your project accordingly. How to do it... First, let us take a look at how we will implement this recipe in Android: Now, after installing our dependencies, we will need to create the UI for our calls. 
To do that, simply copy and paste the following special button XML code into your layout: <com.google.android.gms.common.SignInButton android:id="@+id/gbtn" android:layout_width="368dp" android:layout_height="wrap_content" android:layout_marginLeft="16dp" android:layout_marginTop="30dp" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintTop_toTopOf="parent" android:layout_marginRight="16dp" app:layout_constraintRight_toRightOf="parent" /> The result will be this: Figure 11: Google Sign-in button after the declaration After doing that, let's see the code behind it: SignInButton gBtn; FirebaseAuth mAuth; GoogleApiClient mGoogleApiClient; private final static int RC_SIGN_IN = 3; FirebaseAuth.AuthStateListener mAuthListener; @Override protected void onStart() { super.onStart(); mAuth.addAuthStateListener(mAuthListener); } @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); mAuth = FirebaseAuth.getInstance(); gBtn = (SignInButton) findViewById(R.id.gbtn); button.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { signIn(); } }); mAuthListener = new FirebaseAuth.AuthStateListener() { @Override public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) { if(firebaseAuth.getCurrentUser() != null) { AlertDialog alertDialog = new AlertDialog.Builder(MainActivity.this).create(); alertDialog.setTitle("User"); alertDialog.setMessage("I have a user loged in"); alertDialog.show(); } } }; mGoogleApiClient = new GoogleApiClient.Builder(this) .enableAutoManage(this, new GoogleApiClient.OnConnectionFailedListener() { @Override public void onConnectionFailed(@NonNull ConnectionResult connectionResult) { Toast.makeText(MainActivity.this, "Something went wrong", Toast.LENGTH_SHORT).show(); } }) .addApi(Auth.GOOGLE_SIGN_IN_API, gso) .build(); } GoogleSignInOptions gso = new GoogleSignInOptions.Builder( GoogleSignInOptions.DEFAULT_SIGN_IN) .requestEmail() .build(); private void signIn() { Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent( mGoogleApiClient); startActivityForResult(signInIntent, RC_SIGN_IN); } @Override public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == RC_SIGN_IN) { GoogleSignInResult result = Auth.GoogleSignInApi .getSignInResultFromIntent(data); if (result.isSuccess()) { // Google Sign In was successful, authenticate with Firebase GoogleSignInAccount account = result.getSignInAccount(); firebaseAuthWithGoogle(account); } else { Toast.makeText(MainActivity.this, "Connection Error", Toast.LENGTH_SHORT).show(); } } } private void firebaseAuthWithGoogle( GoogleSignInAccount account) { AuthCredential credential = GoogleAuthProvider.getCredential( account.getIdToken(), null); mAuth.signInWithCredential(credential) .addOnCompleteListener(this, new OnCompleteListener<AuthResult>() { @Override public void onComplete(@NonNull Task<AuthResult> task) { if (task.isSuccessful()) { // Sign in success, update UI with the signed-in user's information Log.d("TAG", "signInWithCredential:success"); FirebaseUser user = mAuth.getCurrentUser(); Log.d("TAG", user.getDisplayName()); } else { Log.w("TAG", "signInWithCredential:failure", task.getException()); Toast.makeText(MainActivity.this, "Authentication failed.", Toast.LENGTH_SHORT) .show(); } // ... 
} }); } Then, simply build and launch your application, click on the authentication button, and you will be greeted with the following screen: Figure 12: Account picker, after clicking on Google Sign-in button. Next, simply pick the account you want to connect with, and then you will be greeted with an alert, finishing the authentication process. Now we will take a look at an implementation of our recipe in iOS: Before we do anything, let's import the Google Sign-in as follows: import GoogleSignIn After that, let's add our Google Sign-in button; to do so, go to your Login Page ViewController and add the following line of code: //Google sign in let googleBtn = GIDSignInButton() googleBtn.frame =CGRect(x: 16, y: 50, width: view.frame.width - 32, height: 50) view.addSubview(googleBtn) GIDSignIn.sharedInstance().uiDelegate = self The frame positioning is for my own needs—you can use it or modify the dimension to suit your application needs. Now, after adding the lines above, we will get an error. This is due to our ViewController not working well with the GIDSignInUIDelegate, so in order to make our xCode happier, let's add it to our ViewModal declaration so it looks like the following: class ViewController: UIViewController, FBSDKLoginButtonDelegate, GIDSignInUIDelegate {} Now, if you build and run your project, you will get the following: Figure 13: iOS application after configuring the Google Sign-in button Now, if you click on the Sign in button, you will get an exception. The reason for that is that the Sign in button is asking for the clientID, so to fix that, go to your AppDelegate file and complete the following import: import GoogleSignIn Next, add the following line of code within the application: didFinishLaunchingWithOptions as shown below: GIDSignIn.sharedInstance().clientID = FirebaseApp.app()?.options.clientID If you build and run the application now, then click on the Sign in button, nothing will happen. Why? Because iOS doesn't know how and where to navigate to next. So now, in order to fix that issue, go to your GoogleService-Info.plist file, copy the value of the REVERSED_CLIENT_ID, then go to your project configuration. Inside the Info section, scroll down to the URL types, add a new URL type, and paste the link inside the URL Schemes field: Figure 14: Xcode Firebase URL schema adding, to finish the Google Sign-in behavior Next, within the application: open URL options, add the following line: GIDSignIn.sharedInstance().handle(url, sourceApplication:options[ UIApplicationOpenURLOptionsKey.sourceApplication] as? String, annotation: options[UIApplicationOpenURLOptionsKey.annotation]) This will simply help the transition to the URL we already specified within the URL schemes. Next, if you build and run your application, tap on the Sign in button and you will be redirected using the SafariWebViewController to the Google Sign-in page, as shown in the following screenshot: Figure 15: iOS Google account picker after clicking on Sign-in button With that, the ongoing authentication process is done, but what will happen when you select your account and authorize the application? Typically, you need to go back to your application with all the needed profile information, don't you? isn't? Well, for now, it's not the case, so let's fix that. 
Go back to the AppDelegate file and do the following: Add the GIDSignInDelegate to the app delegate declaration Add the following line to the application: didFinishLaunchingWithOptions: GIDSignIn.sharedInstance().delegate = self This will simply let us go back to the application with all the tokens we need to finish the authentication process with Firebase. Next, we need to implement the signIn function that belongs to the GIDSignInDelegate; this function will be called once we're successfully authenticated: func sign(_ signIn: GIDSignIn!, didSignInFor user: GIDGoogleUser!, withError error: Error!) { if let err = error { print("Can't connect to Google") return } print("we're using google sign in", user) } Now, once you're fully authenticated, you will receive the success message over your terminal. Now we can simply integrate our Firebase authentication logic. Complete the following import: import FirebaseAuth  Next, inside the same signIn function, add the following: guard let authentication = user.authentication else { return } let credential = GoogleAuthProvider.credential(withIDToken: authentication.idToken, accessToken: authentication.accessToken) Auth.auth().signIn(with: credential, completion: {(user, error) in if let error = error { print("[*] Can't connect to firebase, with error :", error) } print("we have a user", user?.displayName) }) This code will use the successfully logged in user token and call the Firebase Authentication logic to create a new Firebase user. Now we can retrieve the basic profile information that Firebase delivers. How it works... Let's explain what we did in the Android section: We activated authentication using our Google account from the Firebase project console. We also installed the required dependencies, from Firebase Auth to Google services. After finishing the setup, we gained the ability to create that awesome Google Sign-in special button, and we also gave it an ID for easy access. We created references from SignInButton and FirebaseAuth. Let's now explain what we just did in the iOS section: We used the GIDSignButton in order to create the branded Google Sign-in button, and we added it to our ViewController. Inside the AppDelegate, we made a couple of configurations so we could retrieve our ClientID that the button needed to connect to our application. For our button to work, we used the information stored in GoogleService-Info.plist and created an app link within our application so we could navigate to our connection page. Once everything was set, we were introduced to our application authorization page where we authorized the application and chose the account we wanted to use to connect. In order to get back all the required tokens and account information, we needed to go back to the AppDelegate file and implement the GIDSignInDelegate. Within it, we could can all the account-related tokens and information, once we were successful authenticated. Within the implemented SignIn function, we injected our regular Firebase authentication signIn method with all necessary tokens and information. When we built and ran the application again and signed in, we found the account used to authenticate, present in the Firebase authenticated account. To summarize, we learned how to integrate Firebase within a native context, basically over an iOS and Android application. If you've enjoyed reading this, do check,'Firebase Cookbook' for recipes to help you understand features of Firebase and implement them in your existing web or mobile applications. 
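One closing note that ties the two halves of this tutorial together: once users sign in (anonymously, with email/password, or with Google), the wide-open development rules used earlier for the wishes database can be tightened so that only authenticated users can read or write. A minimal sketch of such a rule set, which you would adapt to your own data structure:

{
  "rules": {
    ".read": "auth != null",
    ".write": "auth != null"
  }
}

With rules like these in place, the Realtime Database calls made earlier will only succeed for clients that have completed one of the sign-in flows above.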
Using the Firebase Real-Time Database How to integrate Firebase with NativeScript for cross-platform app development Build powerful progressive web apps with Firebase  

Amazon S3 Security access and policies

Savia Lobo
03 May 2018
7 min read
In this article, you will get to know about Amazon S3, and the security access and policies associated with it.  AWS provides you with S3 as the object storage, where you can store your object files from 1 KB to 5 TB in size at a low cost. It's highly secure, durable, and scalable, and has unlimited capacity. It allows concurrent read/write access to objects by separate clients and applications. You can store any type of file in AWS S3 storage. [box type="shadow" align="" class="" width=""]This article is an excerpt taken from the book,' Cloud Security Automation', written by Prashant Priyam.[/box] AWS keeps multiple copies of all the data stored in the standard S3 storage, which are replicated across devices in the region to ensure the durability of 99.999999999%. S3 cannot be used as block storage. AWS S3 storage is further categorized into three different sections: S3 Standard: This is suitable when we need durable storage for files with frequent access. Reduced Redundancy Storage: This is suitable when we have less critical data that is persistent in nature. Infrequent Access (IA): This is suitable when you have durable data with nonfrequent access. You can opt for Glacier. However, in Glacier, you have a very long retrieval time. So, S3 IA becomes a suitable option. It provides the same performance as the S3 Standard storage. AWS S3 has inbuilt error correction and fault tolerance capabilities. Apart from this, in S3 you have an option to enable versioning and cross-region replication (cross-origin resource sharing (CORS)). If you want to enable versioning on any existing bucket, versioning will be enabled only for new objects in that bucket, not for existing objects. This also happens in the case of CORS, where you can enable cross-region replication, but it will be applicable only for new objects. Security in Amazon S3 S3 is highly secure storage. Here, we can enable fine-grained access policies for resource access and encryption. To enable access-level security, you can use the following: S3 bucket policy IAM access policy MFA for object deletion The S3 bucket policy is a JSON code that defines what will be accessed by whom and at what level: { "Version": "2008-10-17", "Statement": [ { "Sid": "AllowPublicRead", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::prashantpriyam/*" ] } ] } In the preceding JSON code, we have just allowed read-only access to all the objects (as defined in the Action section) for an S3 bucket named prashantpriyam (defined in the Resource section). Similar to the S3 bucket policy, we can also define an IAM policy for S3 bucket access: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetBucketLocation", "s3:ListAllMyBuckets" ], "Resource": "arn:aws:s3:::*" }, { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::prashantpriyam"] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": ["arn:aws:s3:::prashantpriyam/*"] } ] } In the preceding policy, we want to give the user full permissions on the S3 bucket from the AWS console as well. 
In the following section of policy (JSON code), we have granted permission to the user to get the bucket location and list all the buckets for traversal, but here we cannot perform other operations, such as getting object details from the bucket: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetBucketLocation", "s3:ListAllMyBuckets" ], "Resource": "arn:aws:s3:::*" }, { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::prashantpriyam"] }, While in the second section of the policy (specified as follows), we have given permission to users to traverse into the prashantpriyam bucket and perform PUT, GET, and DELETE operations on the object: { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": ["arn:aws:s3:::prashantpriyam/*"] } MFA enables additional security on your account where, after password-based authentication, it asks you to provide the temporary code generated from AWS MFA. We can also use a virtual MFA such as Google Authenticator. AWS S3 supports MFA-based API, which helps to enforce MFA-based access policy on S3 bucket. Let's look at an example where we are giving users read-only access to a bucket while all other operations require an MFA token, which will expire after 600 seconds: { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::prashantpriyam/priyam/*", "Condition": {"Null": {"aws:MultiFactorAuthAge": true }} }, { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::prashantpriyam/priyam/*", "Condition": {"NumericGreaterThan": {"aws:MultiFactorAuthAge": 600 }} }, { "Sid": "", "Effect": "Allow", "Principal": "*", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::prashantpriyam/*" } ] } In the preceding code, you can see that we have allowed all the operations on the S3 bucket if they have an MFA token whose life is less than 600 seconds. Apart from MFA, we can enable versioning so that S3 can automatically create multiple versions of the object to eliminate the risk of unwanted modification of data. This can be enabled with the AWS Console only. You can also enable cross-region replication so that the S3 bucket content can be replicated to the other selected regions. This option is mostly used when you want to deliver static content into two different regions, but it also gives you redundancy. For infrequently accessed data you can enable a lifecycle policy, which helps you to transfer the objects to a low-cost archival storage called Glacier. Let's see how to secure the S3 bucket using the AWS Console. To do this, we need to log in to the S3 bucket and search for S3. Now, click on the bucket you want to secure: In the screenshot, we have selected the bucket called velocis-manali-trip-112017 and, in the bucket properties, we can see that we have not enabled the security options that we have learned so far. Let's implement the security. Now, we need to click on the bucket and then on the Properties tab. From here, we can enable Versioning, Default encryption, Server access logging, and Object-level logging: To enable Server access logging, you need to specify the name of the bucket and a prefix for the logs: To enable encryption, you need to specify whether you want to use AES 256 or AWS KMS based encryption. Now, click on the Permission tab. 
On the Permissions tab, you will be able to define the Access Control List, Bucket Policy, and CORS configuration:

In the Access Control List, you define who can access what and to what extent; in Bucket Policy, you define resource-based permissions on the bucket (as we saw in the bucket policy example); and in the CORS configuration, you define the rules for cross-origin resource sharing. Let's look at a sample CORS file:

<!-- Sample policy -->
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

This is an XML document that allows read-only (GET) access from any origin: instead of a specific URL, the allowed origin is the wildcard *, which means anyone.

Now, click on the Management section. From here, we define lifecycle rules, replication, and so on:

In the example lifecycle rule shown, objects in the S3 bucket are transitioned to the Standard-IA tier after 30 days and to Glacier after 60 days.

This is how we define security on an S3 bucket. To summarize, we learned about security access and policies in Amazon S3. If you've enjoyed reading this, do check out the book 'Cloud Security Automation' to learn how private cloud security functions can be automated for better time- and cost-effectiveness.

Creating and deploying an Amazon Redshift cluster
Amazon Sagemaker makes machine learning on the cloud easy
How cybersecurity can help us secure cyberspace
Adding Fog to Your Games

Packt
21 Sep 2015
8 min read
In this article by Muhammad A. Moniem, author of the book Unreal Engine Lighting and Rendering Essentials, we look at one of the oldest, but most important, rendering features since the rise of 3D rendering: fog. Fog effects have always been an essential part of any rendering engine, regardless of that engine's main goal. In games, however, this feature is a must, not only because of the ambiance and feel it gives the game, but also because it reduces the draw distance when rendering large, open areas, which is great performance-wise! Fog effects can be used for many purposes, from adding ambiance to the world, to setting a global mood (perhaps a scary one), to simulating a real environment, or even distracting the players. By the end of this article, you'll be able to:

Understand both fog types in Unreal Engine
Understand the difference between the two fog types
Master all the parameters that control each fog type

Having said this, let's get started!

(For more resources related to this topic, see here.)

The fog types
Unreal Engine provides the user with two varieties of fog; each has its own set of parameters to modify and produces a different effect. The two supported fog types are as follows:

The Atmospheric Fog
The Exponential Height Fog

The Atmospheric Fog
The Atmospheric Fog gives an approximation of light scattering through a planetary atmosphere. It is the best fog method to use with a natural environment scene, such as a landscape. One of its core features is that it gives your directional light a sun disc effect.

Adding it to your game
By adding an actor from the Visual Effects section of the Modes panel, or from the actor's context menu by right-clicking in the scene view, you can place the Atmospheric Fog directly in your level. In the Visual Effects submenu of the Modes panel, you can find both fog types listed. In order to control the quality of the final visual look of the newly inserted fog, you will have to tweak the properties attached to the actor:

Sun Multiplier: This is an overall multiplier for the directional light's brightness. Increasing this value brightens not only the fog color, but the sky color as well.
Fog Multiplier: This is a multiplier that affects only the fog color (it does not affect the directional light).
Density Multiplier: This is a fog density multiplier (it does not affect the directional light).
Density Offset: This controls the fog opacity.
Distance Scale: This is a distance factor relative to the Unreal unit scale. This value is most effective for a very small world; as the world size increases, you will need to increase this value too, as larger values cause changes in the fog attenuation to take place faster.
Altitude Scale: This is the scale along the z axis.
Distance Offset: This is the distance offset, measured in km, used to manage very large distances.
Ground Offset: This is an offset for the sea level. (Normally, the sea level is 0, and because the fog system does not work for regions below sea level, you need to make sure that all the terrain remains above this value to guarantee that the fog works.)
Start Distance: This is the distance from the camera lens at which the fog starts.
Sun Disk Scale: This is the size of the sun disc. Keep in mind that it can't be 0; there used to be an option to disable the sun disc, but Epic removed it to keep things realistic, so the closest you can get is making the disc as small as possible.
Precompute Params: The properties in this group require the precomputed texture data to be recomputed:
Density Height: This controls the decay of fog density with height. The lower the value, the denser the fog; the higher the value, the less the fog scatters.
Max Scattering Num: This sets a limit on the number of scattering calculations.
Inscatter Altitude Sample Number: This is the number of different altitudes at which the inscatter color is sampled.

The Exponential Height Fog
This type of fog has its own unique requirement. While the Atmospheric Fog can be added anytime or anywhere and it just works, the Exponential Height Fog requires a map with low and high bounds, as its mechanic creates more density in the low areas of a map and less density in the high areas, with a smooth transition between the two. One of the most interesting features of the Exponential Height Fog is that it has two fog colors: one for the hemisphere facing the dominant directional light and another for the opposite hemisphere.

Adding it to your game
As mentioned earlier, adding this fog type from the same Visual Effects section of the Modes panel is very simple: select the Exponential Height Fog actor and drag and drop it into the scene. As you can see, even the icon implies the high and low areas relative to sea level. In order to control the final visual look of the newly inserted fog, you will have to tweak the properties attached to the actor:

Fog Density: This is the global density controller of the fog.
Fog Inscattering Color: This is the inscattering color of the fog (the primary color). In the following image, you can see how different values look:
Fog Height Falloff: This is the height density controller; it controls how the density increases as the height decreases.
Fog Max Opacity: This controls the maximum opacity of the fog. A value of 0 means the fog will be invisible.
Start Distance: This is the distance from the camera at which the fog starts.
Directional Inscattering Exponent: This controls the size of the directional inscattering cone. The higher the value, the clearer the view; the lower the value, the denser the fog appears.
Directional Inscattering Start Distance: This controls the start distance of the directional inscattering from the viewer.
Directional Inscattering Color: This sets the color for directional inscattering, which is used to approximate inscattering from a directional light.
Visible: This controls the fog's visibility.
Actor Hidden in Game: This enables or disables the fog in the game (it does not affect the editor mode).
Editor Billboard Scale: This is the scale of the billboard components in the editor.

The animated fog
As with almost anything else in Unreal Engine, you can animate the fog. Some parts of the engine are very responsive to the animation system, while other parts offer only limited access. The fog falls into the latter category, but you can still animate some of its values, using different methods, at runtime or even in edit mode.
The color
The height fog color can be changed at runtime using the LinearColor Property Track in the Matinee Editor. By performing the following steps, you can change the height fog color in the game:

1. Create a new Matinee Actor.
2. Open the newly created actor in the Matinee Editor.
3. Create a Height Fog Actor.
4. Create a group in Matinee.
5. Attach the Height Fog Actor from the scene to the group created in the previous step.
6. Create a linear color property track in the group.
7. Choose FogInscatteringColor or DirectionalInscatteringColor as the value to control (using two colors is an advantage of this fog type, remember!).
8. Add keyframes to the track, and set the color for them.

Animating the Exponential Height Fog
In order to animate the Exponential Height Fog, you can use one of the following two approaches:

Use Matinee to animate the Exponential Height Fog Actor's values
Use a Timeline node in the Level Blueprint to control the Exponential Height Fog Actor's values

Summary
In this article, you learned about fog effects, the types supported in the Unreal Editor, their different parameters, and how to use each fog type. Now it is recommended that you go straight to your editor, add some fog, and play with its values. Even better, start animating the parameters as mentioned earlier. Don't just try things in edit mode; sometimes the results are different when you hit Play, and even more different when you cook a build, so feel free to package any level you make into an executable and check the results.

Resources for Article:
Further resources on this subject:
Exploring and Interacting with Materials using Blueprints [article]
Creating a Brick Breaking Game [article]
The Unreal Engine [article]