
How-To Tutorials

7008 Articles

No new PEPs will be approved for Python in 2018, with BDFL election pending

Natasha Mathur
24 Jul 2018
3 min read
According to an email exchange on the python-dev mailing list, no new proposals, or PEPs, will be approved for Python until the core team chooses a new leader, with a deadline set for January 1, 2019. The news comes after Python founder and "Benevolent Dictator for Life" (BDFL) Guido van Rossum announced earlier this month that he was standing down from the decision-making process. Here's a quick look at why Guido may have quit as the Python chief (BDFL).

A PEP, or Python Enhancement Proposal, is a design document consisting of information important to the Python community. If a new feature is to be added to the language, it is first detailed in a PEP, which includes the technical specification as well as the rationale for the feature.

A passionate discussion regarding PEPs 576, 579, and 580 started last week on the python-dev list, with Jeroen Demeyer, a Cython contributor, stating that he had finally come up with real-life benchmarks proving why a faster C calling protocol is required in Python. He implemented Cython compilation of SageMath on Python 2.7.15 (SageMath is not yet ported to Python 3), but he mentioned that the conclusions of his implementation should be valid for newer Python versions as well.

Guido, in his capacity as a core developer, responded to this implementation request. In the email thread, he writes that "Jeroen was asked to provide benchmarks but only provided them for Python 2. The reasoning that not much has changed could affect the benchmarks feels a bit optimistic, that's all. The new BDFL may be less demanding though."

Stefan Behnel, a core Cython developer, was disappointed with van Rossum's response. He writes that because Demeyer has already done "the work to clean up the current implementation… it would be very sad if this chance was wasted, and we would have to remain with the current implementation."

But it seems that Guido is not convinced. According to Guido, until the Python core dev team has elected a "new BDFL or come up with some other process for accepting PEPs, no action will be taken on this PEP." The mail says, "right now, there is no-one who can approve this PEP, and you will have to wait until 2019 until there is."

Many Python users are worried about this decision, as the reactions on Reddit show. All that's left to do now is wait for more updates from the python-dev community. Behnel states, "I just hope that Python development won't stall completely. Even if no formal action can be taken on this PEP (or any other), I hope that there will still be informal discussion."

For more coverage on this news, check out the official mail posts on python-dev.

Top 7 Python programming books you need to read
Python, Tensorflow, Excel and more – Data professionals reveal their top tools


Build Hadoop clusters using Google Cloud Platform [Tutorial]

Sunith Shetty
24 Jul 2018
10 min read
Cloud computing has transformed the way individuals and organizations access and manage their servers and applications on the internet. Before cloud computing, everyone managed their servers and applications on their own premises or in dedicated data centers. The increase in raw computing power (CPU and GPU) from multiple cores on a single chip, and the increase in storage space (HDD and SSD), present challenges in efficiently utilizing the available computing resources.

In today's tutorial, we will learn different ways of building a Hadoop cluster on the Cloud, and ways to store and access data on the Cloud. This article is an excerpt from a book written by Naresh Kumar and Prashant Shindgikar titled Modern Big Data Processing with Hadoop.

Building a Hadoop cluster in the Cloud

The Cloud offers a flexible and easy way to rent resources such as servers, storage, networking, and so on. The Cloud has made things very easy for consumers with the pay-as-you-go model, but much of the complexity of the Cloud is hidden from us by the providers. In order to better understand whether Hadoop is well suited to being on the Cloud, let's try to dig further and see how the Cloud is organized internally.

At the core of the Cloud are the following mechanisms:

- A very large number of servers with a variety of hardware configurations
- Servers connected and made available over IP networks
- Large data centers to host these devices
- Data centers spanning geographies with evolved network and data center designs

If we pay close attention, we are talking about the following:

- A very large number of different CPU architectures
- A large number of storage devices with a variety of speeds and performance
- Networks with varying speed and interconnectivity

Let's look at a simple design of such a data center on the Cloud. It contains the following devices:

- S1, S2: Rack switches
- U1-U6: Rack servers
- R1: Router
- Storage area network
- Network attached storage

As we can see, Cloud providers have a very large number of such architectures to make them scalable and flexible. You would have rightly guessed that when the number of such servers increases and we request a new server, the provider can allocate the server anywhere in the region. This makes it a bit challenging for compute and storage to be together, but it also provides elasticity. In order to address this co-location problem, some Cloud providers give the option of creating a virtual network and taking dedicated servers, and then allocating all their virtual nodes on these servers. This is somewhat closer to a data center design, but flexible enough to return resources when not needed.

Let's get back to Hadoop and remind ourselves that, in order to get the best from the Hadoop system, we should have the CPU power closer to the storage. This means that the physical distance between the CPU and the storage should be small, so that the bus speeds match the processing requirements. The slower the I/O speed between the CPU and the storage (for example, iSCSI, storage area network, network attached storage, and so on), the poorer the performance we get from the Hadoop system, as the data is fetched over the network, kept in memory, and then fed to the CPU for further processing. This is one of the important things to keep in mind when designing Hadoop systems on the Cloud.
Apart from performance reasons, there are other things to consider:

- Scaling Hadoop
- Managing Hadoop
- Securing Hadoop

Now, let's try to understand how we can take care of these in the Cloud environment. Hadoop can be installed by the following methods:

- Standalone
- Semi-distributed
- Fully-distributed

When we want to deploy Hadoop on the Cloud, we can deploy it in the following ways:

- Custom shell scripts
- Cloud automation tools (Chef, Ansible, and so on)
- Apache Ambari
- Cloud vendor provided methods: Google Cloud Dataproc, Amazon EMR, Microsoft HDInsight
- Third-party managed Hadoop: Cloudera
- Cloud agnostic deployment: Apache Whirr

Google Cloud Dataproc

In this section, we will learn how to use Google Cloud Dataproc to set up a single node Hadoop cluster. The steps can be broken down into the following:

1. Getting a Google Cloud account.
2. Activating the Google Cloud Dataproc service.
3. Creating a new Hadoop cluster.
4. Logging in to the Hadoop cluster.
5. Deleting the Hadoop cluster.

Getting a Google Cloud account

This section assumes that you already have a Google Cloud account.

Activating the Google Cloud Dataproc service

Once you log in to the Google Cloud console, you need to visit the Cloud Dataproc service and activate it for your project.

Creating a new Hadoop cluster

Once Dataproc is enabled in the project, we can click on Create to create a new Hadoop cluster. After this, we see another screen where we need to configure the cluster parameters. I have left most of the settings at their default values. Then, we can click on the Create button, which creates a new cluster for us.

Logging in to the cluster

After the cluster has been successfully created, we will automatically be taken to the cluster list page. From there, we can launch an SSH window to log in to the single node cluster we have created. As you can see in the SSH session, the Hadoop command is readily available for us and we can run any of the standard Hadoop commands to interact with the system.

Deleting the cluster

In order to delete the cluster, click on the DELETE button and it will display a confirmation window. After this, the cluster will be deleted. Looks simple, right? Yes. Cloud providers have made it very simple for users to use the Cloud and pay only for their usage.

Data access in the Cloud

The Cloud has become an important destination for storing both personal data and business data. Depending upon the importance and the secrecy requirements of the data, organizations have started using the Cloud to store their vital datasets, following a variety of access patterns. Cloud providers offer different varieties of storage. Let's take a look at what these types are:

- Block storage
- File-based storage
- Encrypted storage
- Offline storage

Block storage

This type of storage is primarily useful when we want to use it along with our compute servers and manage the storage via the host operating system. To understand this better, this type of storage is equivalent to the hard disk/SSD that comes with our laptops when we purchase them. In the case of laptop storage, if we decide to increase the capacity, we need to replace the existing disk with another one. When it comes to the Cloud, if we want to add more capacity, we can simply purchase another, larger capacity storage device and attach it to our server.
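As a scripted counterpart to the Dataproc console walkthrough above, the same create-and-delete cycle can also be driven from code. The following is a minimal sketch, assuming the google-cloud-dataproc Python client library and application default credentials; the project ID, region, cluster name, and machine types are placeholders, the sketch provisions a small three-node cluster rather than the single-node setup from the walkthrough, and request shapes vary between library versions, so treat it as illustrative rather than as the book's method:

    # Illustrative sketch only: scripting the Dataproc create/delete cycle.
    # Assumes the google-cloud-dataproc client library and application default
    # credentials; project, region, cluster name and machine types are placeholders.
    from google.cloud import dataproc_v1

    project_id = "my-project"      # placeholder
    region = "us-central1"         # placeholder
    cluster_name = "hadoop-test"   # placeholder

    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )

    cluster = {
        "project_id": project_id,
        "cluster_name": cluster_name,
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-2"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-2"},
        },
    }

    # Create the cluster and wait for the long-running operation to finish.
    operation = client.create_cluster(
        request={"project_id": project_id, "region": region, "cluster": cluster}
    )
    operation.result()
    print("Created cluster:", cluster_name)

    # Delete the cluster once we are done experimenting, so we only pay for what we use.
    client.delete_cluster(
        request={"project_id": project_id, "region": region, "cluster_name": cluster_name}
    ).result()
    print("Deleted cluster:", cluster_name)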
This ease of adding or shrinking the storage we need is one of the reasons why the Cloud has become so popular. It's good to remember that, since there are many different types of access patterns for our applications, Cloud vendors also offer block storage with varying storage/speed characteristics, measured in terms of capacity, IOPS, and so on.

Let's take an example of a capacity upgrade requirement and see what we do to utilize this block storage on the Cloud. Imagine a server called DB1, created by the administrator with an original capacity of 100 GB. Later, due to unexpected demand from customers, an application started consuming all 100 GB of storage, so the administrator decided to increase the capacity to 1 TB (1,024 GB). This is what the workflow looks like in this scenario:

1. Create a new 1 TB disk on the Cloud
2. Attach the disk to the server and mount it
3. Take a backup of the database
4. Copy the data from the existing disk to the new disk
5. Start the database
6. Verify the database
7. Destroy the data on the old disk and return the disk

This process is simplified, but in production it might take some time, depending upon the type of maintenance being performed by the administrator. From the Cloud perspective, however, acquiring new block storage is very quick.

File storage

Files are the basics of computing. If you are familiar with UNIX/Linux environments, you already know that everything is a file in the Unix world. But don't get confused by that, as every operating system has its own way of dealing with hardware resources. Here we are not worried about how the operating system deals with hardware resources; we are talking about the important documents that users store as part of their day-to-day business. These files can be:

- Movie/conference recordings
- Pictures
- Excel sheets
- Word documents

Even though they are simple-looking files on our computers, they can have significant business importance and should be dealt with carefully when we think of storing them on the Cloud. Most Cloud providers offer an easy way to store these simple files on the Cloud and also offer flexibility in terms of security. A typical workflow for acquiring storage of this form is as follows:

1. Create a new storage bucket that's uniquely identified
2. Add private/public visibility to this bucket
3. Add a multi-geography replication requirement to the data that is stored in this bucket

Some Cloud providers bill their customers based on the number of features they select as part of their bucket creation. Please choose a hard-to-discover name for buckets that contain confidential data, and also make them private.

Encrypted storage

This is a very important requirement for business-critical data, as we do not want the information to be leaked outside the scope of the organization. Cloud providers offer an encryption-at-rest facility for us. Some vendors choose to do this automatically, and some vendors also provide the flexibility to let us choose the encryption keys and the methodology for encrypting/decrypting the data that we own. Depending upon the organization's policy, we should follow best practices in dealing with this on the Cloud. With the increase in the performance of storage devices, encryption does not add significant overhead while decrypting/encrypting files.
Continuing the same example as before, when we choose to encrypt the underlying block storage of 1 TB, we can leverage the Cloud-offered encryption, where the provider automatically encrypts and decrypts the data for us. So, we do not have to employ special software on the host operating system to do the encryption and decryption. Remember that encryption can be a feature that's available in both the block storage and file-based storage offerings from the vendor.

Cold storage

This storage is very useful for storing important backups in the Cloud that are rarely accessed. Since we are dealing with a special type of data here, we should also be aware that the Cloud vendor might charge significantly higher amounts for data access from this storage, as it's meant to be written once and forgotten (until it's needed). The advantage of this mechanism is that we pay less to store even petabytes of data.

We looked at the different steps involved in building our own Hadoop cluster on the Cloud, and we saw different ways of storing and accessing our data on the Cloud. To know more about how to build expert Big Data systems, do check out this book, Modern Big Data Processing with Hadoop.

Read More:
What makes Hadoop so revolutionary?
Machine learning APIs for Google Cloud Platform
Getting to know different Big data Characteristics


JavaScript async programming using Promises [Tutorial]

Pavan Ramchandani
23 Jul 2018
10 min read
JavaScript now has a new native pattern for writing asynchronous code, called the Promise pattern. This new pattern removes the common code issues that the event and callback patterns had. It also makes the code look more like synchronous code. A promise (or a Promise object) represents an asynchronous operation. Existing asynchronous JavaScript APIs are usually wrapped with promises, and the new JavaScript APIs are implemented purely using promises. Promises are new in JavaScript but have long been present in many other programming languages. Languages such as C# 5, C++ 11, Swift, and Scala are some examples that support promises. In this tutorial, we will see how to use promises in JavaScript.

This article is an excerpt from the book Learn ECMAScript - Second Edition, written by Mehul Mohan and Narayan Prusty.

Promise states

A promise is always in one of these states:

- Fulfilled: If the resolve callback is invoked with a non-promise object as the argument, or with no argument, then we say that the promise is fulfilled
- Rejected: If the reject callback is invoked, or an exception occurs in the executor scope, then we say that the promise is rejected
- Pending: If the resolve or reject callback is yet to be invoked, then we say that the promise is pending
- Settled: A promise is said to be settled if it's either fulfilled or rejected, but not pending

Once a promise is fulfilled or rejected, it cannot be transitioned back. An attempt to transition it will have no effect.

Promises versus callbacks

Suppose you wanted to perform three AJAX requests one after another. Here's a dummy implementation of that in callback style:

    ajaxCall('http://example.com/page1', response1 => {
        ajaxCall('http://example.com/page2' + response1, response2 => {
            ajaxCall('http://example.com/page3' + response2, response3 => {
                console.log(response3)
            })
        })
    })

You can see how quickly you can end up in something known as callback hell. Multiple levels of nesting make code not only unreadable but also difficult to maintain. Furthermore, if you start processing data after every call, and the next call is based on a previous call's response data, the complexity of the code becomes hard to manage.

Callback hell refers to multiple asynchronous functions nested inside each other's callback functions. This makes code harder to read and maintain.

Promises can be used to flatten this code. Let's take a look:

    ajaxCallPromise('http://example.com/page1')
        .then( response1 => ajaxCallPromise('http://example.com/page2' + response1) )
        .then( response2 => ajaxCallPromise('http://example.com/page3' + response2) )
        .then( response3 => console.log(response3) )

You can see that the code complexity is suddenly reduced and the code looks much cleaner and more readable. Let's first see how ajaxCallPromise would have been implemented; the explanation that follows should make the preceding snippet clearer.

Promise constructor and (resolve, reject) methods

To convert an existing callback-style function to a Promise, we have to use the Promise constructor. In the preceding example, ajaxCallPromise returns a Promise, which can be either resolved or rejected by the developer. Let's see how to implement ajaxCallPromise:

    const ajaxCallPromise = url => {
        return new Promise((resolve, reject) => {
            // DO YOUR ASYNC STUFF HERE
            $.ajaxAsyncWithNativeAPI(url, function(data) {
                if(data.resCode === 200) {
                    resolve(data.message)
                } else {
                    reject(data.error)
                }
            })
        })
    }

Hang on! What just happened there? First, we returned a Promise from the ajaxCallPromise function.
The Promise constructor accepts a function argument, and that function itself accepts two very special arguments: resolve and reject. resolve and reject are themselves functions. When, inside the Promise constructor function body, you call resolve or reject, the promise acquires a resolved or rejected value that is unchangeable later on. We then make use of the native callback-based API and check if everything is OK. If everything is indeed OK, we resolve the Promise with the value being the message sent by the server (assuming a JSON response). If there was an error in the response, we reject the promise instead.

You can return a promise in a then call. When you do that, you can flatten the code instead of chaining promises again. For example, if foo() and bar() both return a Promise, then, instead of:

    foo().then( res => {
        bar().then( res2 => {
            console.log('Both done')
        })
    })

We can write it as follows:

    foo()
        .then( res => bar() ) // bar() returns a Promise
        .then( res => {
            console.log('Both done')
        })

This flattens the code.

The then(onFulfilled, onRejected) method

The then() method of a Promise object lets us do a task after a Promise has been fulfilled or rejected. The task can also be another event-driven or callback-based asynchronous operation. The then() method of a Promise object takes two arguments: the onFulfilled and onRejected callbacks. The onFulfilled callback is executed if the Promise object was fulfilled, and the onRejected callback is executed if the Promise was rejected. The onRejected callback is also executed if an exception is thrown in the scope of the executor. Therefore, it behaves like an exception handler, that is, it catches exceptions. The onFulfilled callback takes a parameter, the fulfillment value of the promise. Similarly, the onRejected callback takes a parameter, the reason for rejection:

    ajaxCallPromise('http://example.com/page1').then(
        successData => {
            console.log('Request was successful')
        },
        failData => {
            console.log('Request failed' + failData)
        }
    )

When we reject the promise inside the ajaxCallPromise definition, the second function (the failData one) will execute instead of the first function.

Let's take one more example by converting setTimeout() from a callback to a promise. This is how setTimeout() looks:

    setTimeout( () => {
        // code here executes after TIME_DURATION milliseconds
    }, TIME_DURATION)

A promised version will look something like the following:

    const PsetTimeout = duration => {
        return new Promise((resolve, reject) => {
            setTimeout( () => {
                resolve()
            }, duration);
        })
    }

    // usage:
    PsetTimeout(1000)
        .then(() => {
            console.log('Executes after a second')
        })

Here we resolved the promise without a value. If you do that, it gets resolved with a value equal to undefined.

The catch(onRejected) method

The catch() method of a Promise object is used instead of the then() method when we use the then() method only to handle errors and exceptions. There is nothing special about how the catch() method works; it's just that it makes the code much easier to read, as the word catch makes it more meaningful. The catch() method takes just one argument, the onRejected callback. The onRejected callback of the catch() method is invoked in the same way as the onRejected callback of the then() method. The catch() method always returns a promise.
Here is how a new Promise object is returned by the catch() method:

- If there is no return statement in the onRejected callback, then a new fulfilled Promise is created internally and returned.
- If we return a custom Promise, then it internally creates and returns a new Promise object. The new Promise object resolves the custom Promise object.
- If we return something other than a custom Promise in the onRejected callback, then a new Promise object is created internally and returned. The new Promise object resolves the returned value.
- If we pass null instead of the onRejected callback, or omit it, then a callback is created internally and used instead. The internally created onRejected callback returns a rejected Promise object. The reason for the rejection of the new Promise object is the same as the reason for the rejection of the parent Promise object.
- If the Promise object on which catch() is called gets fulfilled, then the catch() method simply returns a new fulfilled Promise object and ignores the onRejected callback. The fulfillment value of the new Promise object is the same as the fulfillment value of the parent Promise.

To understand the catch() method, consider this code:

    ajaxPromiseCall('http://invalidURL.com')
        .then(success => {
            console.log(success)
        }, failed => {
            console.log(failed)
        });

This code can be rewritten in this way using the catch() method:

    ajaxPromiseCall('http://invalidURL.com')
        .then(success => console.log(success))
        .catch(failed => console.log(failed));

These two code snippets work more or less in the same way.

The Promise.resolve(value) method

The resolve() method of the Promise object takes a value and returns a Promise object that resolves to the passed value. The resolve() method is basically used to convert a value to a Promise object. It is useful when you find yourself with a value that may or may not be a Promise, but you want to use it as a Promise. For example, jQuery promises have different interfaces from ES6 promises, so you can use the resolve() method to convert jQuery promises into ES6 promises. Here is an example that demonstrates how to use the resolve() method:

    const p1 = Promise.resolve(4);
    p1.then(function(value){
        console.log(value);
    });

    // passing a promise object
    Promise.resolve(p1).then(function(value){
        console.log(value);
    });

    Promise.resolve({name: "Eden"})
        .then(function(value){
            console.log(value.name);
        });

The output is as follows:

    4
    4
    Eden

The Promise.reject(value) method

The reject() method of the Promise object takes a value and returns a rejected Promise object with the passed value as the reason. Unlike the Promise.resolve() method, the reject() method is used for debugging purposes and not for converting values into promises. Here is an example that demonstrates how to use the reject() method:

    const p1 = Promise.reject(4);
    p1.then(null, function(value){
        console.log(value);
    });

    Promise.reject({name: "Eden"})
        .then(null, function(value){
            console.log(value.name);
        });

The output is as follows:

    4
    Eden

The Promise.all(iterable) method

The all() method of the Promise object takes an iterable object as an argument and returns a Promise that fulfills when all of the promises in the iterable object have been fulfilled. This can be useful when we want to execute a task after several asynchronous operations have finished.
Here is a code example that demonstrates how to use the Promise.all() method:

    const p1 = new Promise(function(resolve, reject){
        setTimeout(function(){
            resolve();
        }, 1000);
    });

    const p2 = new Promise(function(resolve, reject){
        setTimeout(function(){
            resolve();
        }, 2000);
    });

    const arr = [p1, p2];

    Promise.all(arr).then(function(){
        console.log("Done"); // "Done" is logged after 2 seconds
    });

If the iterable object contains a value that is not a Promise object, then it's converted to a Promise object using the Promise.resolve() method. If any of the passed promises get rejected, then the Promise.all() method immediately returns a new rejected Promise, with the same reason as the rejected passed Promise. Here is an example to demonstrate this:

    const p1 = new Promise(function(resolve, reject){
        setTimeout(function(){
            reject("Error");
        }, 1000);
    });

    const p2 = new Promise(function(resolve, reject){
        setTimeout(function(){
            resolve();
        }, 2000);
    });

    const arr = [p1, p2];

    Promise.all(arr).then(null, function(reason){
        console.log(reason); // "Error" is logged after 1 second
    });

The Promise.race(iterable) method

The race() method of the Promise object takes an iterable object as the argument and returns a Promise that fulfills or rejects as soon as one of the promises in the iterable object is fulfilled or rejected, with the fulfillment value or reason from that Promise. As the name suggests, the race() method is used to race promises and see which one finishes first. Here is a code example that shows how to use the race() method:

    var p1 = new Promise(function(resolve, reject){
        setTimeout(function(){
            resolve("Fulfillment Value 1");
        }, 1000);
    });

    var p2 = new Promise(function(resolve, reject){
        setTimeout(function(){
            resolve("Fulfillment Value 2");
        }, 2000);
    });

    var arr = [p1, p2];

    Promise.race(arr).then(function(value){
        console.log(value); // Outputs "Fulfillment Value 1"
    }, function(reason){
        console.log(reason);
    });

At this point, I assume you have a basic understanding of how promises work, what they are, and how to convert a callback-style API into a promised API. Let's take a look at async/await, the future of asynchronous programming. If you found this article useful, do check out the book Learn ECMAScript, Second Edition for learning the ECMAScript standards to design your web applications.

Implementing 5 Common Design Patterns in JavaScript (ES8)
What's new in ECMAScript 2018 (ES9)?
How to build a weather app using Kotlin for JavaScript


Get to know ASP.NET Core Web API [Tutorial]

Packt Editorial Staff
23 Jul 2018
13 min read
ASP.NET Web API is a framework that makes it easy to build HTTP services that reach a broad range of clients, including browsers and mobile devices. ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework. In today's post we shall be looking at the following topics:

- Quick recap of the MVC framework
- Why Web APIs were incepted and their evolution
- Introduction to .NET Core
- Overview of the ASP.NET Core architecture

This article is an extract from the book Mastering ASP.NET Web API written by Mithun Pattankar and Malendra Hurbuns.

Quick recap of the MVC framework

Model-View-Controller (MVC) is a powerful and elegant way of separating concerns within an application, and it applies itself extremely well to web applications. With ASP.NET MVC, it translates roughly as follows:

- Models (M): These are the classes that represent the domain you are interested in. These domain objects often encapsulate data stored in a database, as well as code that manipulates the data and enforces domain-specific business logic. With ASP.NET MVC, this is most likely a Data Access Layer of some kind, using a tool like Entity Framework or NHibernate, or classic ADO.NET.
- View (V): This is a template to dynamically generate HTML.
- Controller (C): This is a special class that manages the relationship between the View and the Model. It responds to user input, talks to the Model, and decides which view to render (if any). In ASP.NET MVC, this class is conventionally denoted by the suffix Controller.

Why Web APIs were incepted and their evolution

Looking back to the days when the ASP.NET ASMX-based XML web service was widely used for building service-oriented applications, it was the easiest way to create SOAP-based services which could be used by both .NET and non-.NET applications. It was available only over HTTP. Around 2006, Microsoft released Windows Communication Foundation (WCF). WCF was, and still is, a powerful technology for building SOA-based applications. It was a giant leap in the Microsoft .NET world. WCF was flexible enough to be configured as an HTTP service, a Remoting service, a TCP service, and so on. Using WCF Contracts, we could keep the entire business logic code base the same and expose the service as HTTP-based or non-HTTP-based via SOAP/non-SOAP.

Until 2010, ASMX-based XML web services and WCF services were widely used in client-server applications; in fact, everything was running smoothly. But developers in the .NET and non-.NET communities started to feel the need for a completely new SOA technology for client-server applications. Some of the reasons behind this were as follows:

- With applications in production, the amount of data exchanged during communication started to explode, and transferring it over the network was bandwidth consuming.
- SOAP, lightweight to some extent, started to show signs of payload increase. SOAP packets of a few KB were becoming a few MB of data transfer.
- Consuming SOAP services in applications led to huge application sizes because of WSDL and proxy generation. This was even worse when they were used in web applications.
- Any change to a SOAP service meant regenerating the proxy to consume it again. This wasn't an easy task for developers.
- JavaScript-based web frameworks were being released and gaining ground as a much simpler way of web development. Consuming SOAP-based services was not optimal for them.
- Hand-held devices like tablets and smartphones were becoming popular. They had more focused applications and needed a very lightweight service-oriented approach.
- Browser-based Single Page Applications (SPAs) were gaining ground very rapidly, and using SOAP-based services was quite heavy for these SPAs.
- Microsoft released REST-based WCF components which could be configured to respond in JSON or XML, but even then it was WCF, which was a heavy technology to use.
- Applications were no longer just large enterprise services; there was a need for more focused, lightweight services that could be up and running in a few days and were much easier to use.

Any developer who had seen the evolving nature of SOA-based technologies like ASMX, WCF, or anything SOAP-based felt the need for much lighter, HTTP-based services. HTTP-only, JSON-compatible, POCO-based lightweight services were the need of the hour, and the concept of Web API started gaining momentum.

What is Web API?

A Web API is a programmatic interface to a system that is accessed via standard HTTP methods and headers. A Web API can be accessed by a variety of HTTP clients, including browsers and mobile devices. For Web API to be a successful HTTP-based service, it needed a strong web infrastructure, with hosting, caching, concurrency, logging, security, and so on. One of the best web infrastructures was none other than ASP.NET. ASP.NET, either in the form of Web Forms or MVC, was widely adopted, so this solid web infrastructure was mature enough to be extended as Web API.

Microsoft responded to community needs by creating ASP.NET Web API: a super-simple yet very powerful framework for building HTTP-only, JSON-by-default web services without all the fuss of WCF. ASP.NET Web API can be used to build REST-based services in a matter of minutes, and these can easily be consumed with any frontend technology. As it used IIS (mostly) for hosting, caching, concurrency, and similar features, it became quite popular. It was launched in 2012 with the most basic needs for HTTP-based services, like convention-based routing and HTTP Request and Response messages. Later, Microsoft released the much bigger and better ASP.NET Web API 2 along with ASP.NET MVC 5 in Visual Studio 2013. ASP.NET Web API 2 evolved at a much faster pace with the following features.

Installed via NuGet

Installing Web API 2 was made simpler by using NuGet: either create an empty ASP.NET or MVC project and then run this command in the NuGet Package Manager Console:

    Install-Package Microsoft.AspNet.WebApi

Attribute Routing

The initial release of Web API was based on convention-based routing, meaning we define one or more route templates and work around them. It's simple and without much fuss, as the routing logic sits in a single place and is applied across all controllers. Real-world applications are more complicated, though, with resources (controllers/actions) having child resources, such as customers having orders, books having authors, and so on. In such cases, convention-based routing is not scalable. Web API 2 introduced a new concept of Attribute Routing, which uses attributes in the programming language to define routes. One straightforward advantage is that the developer has full control over how the URIs for the Web API are formed. Here is a quick snippet of Attribute Routing:

    [Route("customers/{customerId}/orders")]
    public IEnumerable<Order> GetOrdersByCustomer(int customerId) { ... }

For more understanding of this, read Attribute Routing in ASP.NET Web API 2 (https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2).

OWIN self-host

ASP.NET Web API lives on the ASP.NET framework, which may lead you to think that it can be hosted on IIS only. Web API 2 came with a new hosting package:

    Microsoft.AspNet.WebApi.OwinSelfHost

With this package, it can be self-hosted outside IIS using OWIN/Katana.
CORS (Cross-Origin Resource Sharing)

If a Web API, developed using either .NET or non-.NET technologies, is meant to be used across different web frameworks, then enabling CORS is a must. A must-read on CORS and ASP.NET Web API 2: https://www.asp.net/web-api/overview/security/enabling-cross-origin-requests-in-web-api.

IHttpActionResult and Web API OData improvements are a few other notable features which helped Web API 2 evolve into a strong technology for developing HTTP-based services. ASP.NET Web API 2 has become more powerful over the years with C# language improvements like asynchronous programming using async/await, LINQ, Entity Framework integration, Dependency Injection with DI frameworks, and so on.

ASP.NET into the open source world

Every technology has to evolve with growing needs and advancements in the hardware, network, and software industries, and ASP.NET Web API is no exception. Some of the evolution that ASP.NET Web API should undergo, from the perspectives of the developer community, enterprises, and end users, is as follows:

- ASP.NET MVC and Web API are both part of the ASP.NET stack, but their implementations and code bases are different. A unified code base reduces the burden of maintaining them.
- Web APIs are consumed by various clients, like web applications, native apps, hybrid apps, and desktop applications, using different technologies (.NET or non-.NET). But how about developing Web APIs in a cross-platform way, without always relying on the Windows OS and the Visual Studio IDE?
- Open sourcing the ASP.NET stack so that it's adopted on a much bigger scale. End users benefit from open source innovations.

We saw why Web APIs were incepted, how they evolved into a powerful HTTP-based service, and some of the evolution still required. With these thoughts, Microsoft made an entry into the world of open source by launching .NET Core and ASP.NET Core 1.0.

What is .NET Core?

.NET Core is a cross-platform, free and open-source managed software framework similar to the .NET Framework. It consists of CoreCLR, a complete cross-platform runtime implementation of the CLR. .NET Core 1.0 was released on 27 June 2016, along with Visual Studio 2015 Update 3, which enables .NET Core development. In much simpler terms, .NET Core applications can be developed, tested, and deployed on cross platforms such as Windows, Linux flavours, and macOS systems. With the help of .NET Core, we don't really need the Windows OS, and in particular the Visual Studio IDE, to develop ASP.NET web applications, command-line apps, libraries, and UWP apps. In short, let's understand the .NET Core components:

- CoreCLR: It is a virtual machine that manages the execution of .NET programs. CoreCLR means Core Common Language Runtime; it includes the garbage collector, the JIT compiler, base .NET data types, and many low-level classes.
- CoreFX: .NET Core foundational libraries, like classes for collections, file systems, console, XML, async, and many others.
- CoreRT: The .NET Core runtime optimized for AOT (ahead-of-time compilation) scenarios, with the accompanying .NET Native compiler toolchain. Its main responsibility is the native compilation of code written in any of our favorite .NET programming languages.

.NET Core shares a subset of the original .NET Framework, plus it comes with its own set of APIs that are not part of the .NET Framework. This results in some shared APIs that can be used by both .NET Core and the .NET Framework. A .NET Core application can easily work on the existing .NET Framework, but not vice versa.
.NET Core provides a CLI (Command Line Interface) as an execution entry point for operating systems and provides developer services like compilation and package management. The following are some interesting points to know about .NET Core:

- .NET Core can be installed on cross platforms like Windows, Linux, and macOS. It can be used in device, cloud, and embedded/IoT scenarios.
- The Visual Studio IDE is not mandatory to work with .NET Core, but when working on Windows OS we can leverage existing IDE knowledge.
- .NET Core is modular, meaning that instead of assemblies, developers deal with NuGet packages.
- .NET Core relies on its package manager to receive updates, because a cross-platform technology can't rely on Windows Updates.
- To learn .NET Core, we just need a shell, a text editor, and its runtime installed.
- .NET Core comes with flexible deployment. It can be included in your app or installed side-by-side, user-wide or machine-wide. .NET Core apps can also be self-hosted/run as standalone apps.
- .NET Core supports four cross-platform scenarios: ASP.NET Core web apps, command-line apps, libraries, and Universal Windows Platform apps. It does not implement Windows Forms or WPF, which render the standard GUI for desktop software on Windows.
- At present, only the C# programming language can be used to write .NET Core apps. F# and VB support are on the way.

We will primarily focus on ASP.NET Core web apps, which include MVC and Web API. CLI apps and libraries will be covered briefly.

What is ASP.NET Core?

ASP.NET Core is a new open-source and cross-platform framework for building modern cloud-based web applications using .NET. ASP.NET Core is completely open source; you can download it from GitHub. It's cross-platform, meaning you can develop ASP.NET Core apps on Linux/macOS and, of course, on Windows OS. ASP.NET was first released almost 15 years back with the .NET Framework. Since then, it has been adopted by millions of developers for large and small applications and has evolved with many capabilities. With .NET Core being cross-platform, ASP.NET took a huge leap beyond the boundaries of the Windows OS environment for the development and deployment of web applications.

ASP.NET Core overview

A high-level overview of ASP.NET Core provides the following insights:

- ASP.NET Core runs on both the full .NET Framework and .NET Core.
- ASP.NET Core applications targeting the full .NET Framework can only be developed and deployed on Windows OS/Server.
- When using .NET Core, applications can be developed and deployed on the platform of your choice: Windows, Linux, and macOS all work with ASP.NET Core.
- ASP.NET Core on a non-Windows machine uses the .NET Core libraries to run applications. It's obvious that you won't have all the full .NET libraries, but most of them are available.
- Developers working on ASP.NET Core can easily switch to working on any machine, not confined to the Visual Studio 2015 IDE.
- ASP.NET Core can run with different versions of .NET Core.

ASP.NET Core has many more foundational improvements apart from being cross-platform; we gain the following advantages from using ASP.NET Core:

- Totally modular: ASP.NET Core takes a totally modular approach to application development; every component needed to build an application is well factored into NuGet packages. Only add the required packages through NuGet to keep the overall application lightweight. ASP.NET Core is no longer based on System.Web.dll.
- Choose your editors and tools: The Visual Studio IDE was used to develop ASP.NET applications on a Windows OS box, but now that we have moved beyond the Windows world, we need IDEs/editors/tools for developing ASP.NET applications on Linux/macOS. Microsoft developed a powerful, lightweight code editor for almost any type of web application, called Visual Studio Code. .NET Core is such a framework that we don't even need Visual Studio IDE/Code to develop applications; we can also use code editors like Sublime and Vim. To work with C# code in these editors, install and use the OmniSharp plugin. OmniSharp is a set of tooling, editor integrations, and libraries that together create an ecosystem that allows you to have a great programming experience no matter what your editor and operating system of choice may be.
- Integration with modern web frameworks: ASP.NET Core has powerful, seamless integration with modern web frameworks like Angular, Ember, NodeJS, and Bootstrap. Using Bower and NPM, we can work with modern web frameworks.
- Cloud ready: ASP.NET Core apps are cloud ready thanks to the configuration system; they transition seamlessly from on-premises to cloud.
- Built-in Dependency Injection.
- Can be hosted on IIS, or self-hosted in your own process, or on nginx.
- New lightweight and modular HTTP request pipeline.
- Unified code base for Web UI and Web APIs. We will see more on this when we explore the anatomy of an ASP.NET Core application.

To summarize, we covered the MVC framework and introduced .NET Core and its architecture. You can leverage ASP.NET Web API to build professional web services and create powerful applications; check out this book, Mastering ASP.NET Web API, written by Mithun Pattankar and Malendra Hurbuns.

What is ASP.NET Core?
Why ASP.NET makes building apps for mobile and web easy – Interview with Jason de Oliveira
How to call an Azure function from an ASP.NET Core MVC application


The developer-tester face-off needs to end. It's putting our projects at risk.

Aaron Lazar
21 Jul 2018
6 min read
Penny and Leonard work at the same company as a tester and a developer respectively. Penny arrives home late to find Leonard on the couch with his legs up on the table, playing his favourite video game.

Leonard: Oh hi sweety, it looks like you had a long day at work.

Penny, throwing him a hostile, sideways glance, heads over to the refrigerator.

Penny: Did you remember to take out the garbage?

Leonard: Of course, sweety. I used 2 bags so Sheldon's Szechuan sauce from Szechuan Palace doesn't seep through.

Penny: Did you buy new shampoo for the bathroom?

Leonard: Yes, I picked up your regular one from the store on the way back.

Penny: And did you slap on a last minute field on the SPA at work?

Leonard, pausing his video game and answering in a soft, high pitched voice: Whaaaaaat?

If you're a developer or a tester, you've probably been in this situation at least once, if not more, even if your husband or wife isn't on the other side of the source code.

The war goes on...

The funny thing is that this isn't something that's happened only in the recent past. The war between developers and testers is a long-standing, unresolved battle that is usually brought up in bouts of unnecessary humor. The truth is that this battle is the cause of several projects slipping deadlines, teams not respecting each other's views, and so on. Here we'll discuss some of the main reasons for this disconnect and try to address them in the hope of making the office a better place.

#1 You talkin' to me?

One of the main reasons that developers and testers are not on the same page is that neither bothers to communicate effectively with the other. Each individual considers informing the other about the strategy or techniques used a waste of effort. Obviously, there are bound to be issues arising from such a disjointed team. The only way to resolve this problem is to toss egos out of the window, sit down, and resolve problems like professionals. While tickets might be the most professional and efficient way to resolve things, walking up to the person (if possible) and discussing the best way forward lets you build a relationship and resolve things more effectively. Moreover, the person on the receiving end will not consider the move offensive or demeaning.

#2 Is it 'team' or 'teams'?

You know the answer to this one, but you're still not willing to accept it. IT managers and team leads need to create an environment in which developers and testers are not two separate teams. Rather, consider them all engineers working in the same team, towards the same goal! There's no better recipe for success. Use modern methods like mob or pair programming, where developers and testers work together closely. The ideal scenario would be to have both team members work on the same machine, addressing and strategising to achieve the goal with continuous, real-time feedback.

A good pairing station (Source: Ministry of Testing)

#3 On the same page? Which book you got there?

If you're a developer, this one's especially for you, so listen carefully! Most developers aren't aware of what tools the testers in their teams use, which is a sin. Being aware of testing tools, methodologies, and processes goes a long way in enabling a smooth and speedy testing process. A developer will be able to understand which parts of their code can be a tester's target, what changes would give testers a tough time, and, on the other hand, what makes things easy.

#4 One goal, two paths to achieve it

Well, this is true a lot of the time.
Developers aim to "build" an application. Testers, on the other hand, aim to "break" the application. Now, while this is not wrong, what matters is the vision with which the tester actually approaches breaking things. Testers, you should always keep the customer's or end user's requirements in mind while approaching the application.

I may not be an actual tester, and you might wonder how I can empathise with testers. Honestly, I don't build software, and neither do I test any. But I've been in a very similar role earlier in my publishing career. As a Commissioning Gatekeeper, I was responsible for validating book and video ideas from the Commissioning Editors. Like a tester, my job was to identify why or how something wouldn't work in the market. Now, I could easily approach a particular book idea from the perspective of 'trashing' it. But when I learned to approach it from the customer's point of view, my perspective changed, and I was able to give better constructive feedback to the editor. Don't aim to destroy, aim to improve. If you must kill off an idea or a feature, do it firmly but with kindness.

#5 Trust the developer's testing skills

Yes! Lack of trust is one of the main reasons why there's so much friction between developers and testers. A tester needs to understand the developer and believe that they can also write tests with a clear goal in mind. Test-Driven Development is a great approach to follow. Here, the developer will know better what angles to test from, and this can help the tester write a mutually defined test case for the developer to run. At the same time, the tester can also provide insight into how to address bugs that might creep up while running the tests. With this combined knowledge, the developers will be able to minimize the number of bugs on the first go itself! Toss in a Business-Driven Development approach, and you've got yourself a team that delivers user stories more aligned to the business requirement than ever before!

In the end, developers and testers both need to set their egos aside and make peace with each other. If you really look at it, it's not that hard at all. It's all about how the two collaborate to create better software, rather than working in silos. IT managers can play an important role here: they need to understand the advantages and limitations of their team, and they need to ensure the unity of the team by encouraging more engaging ways of working as well as introducing modern methodologies that support a peaceful, collaborative effort.

Why does more than half the IT industry suffer from Burnout?
Abandoning Agile
Unit testing with Java frameworks: JUnit and TestNG [Tutorial]


Optical training of Neural networks is making AI more efficient

Natasha Mathur
20 Jul 2018
3 min read
According to research conducted by T. W. Hughes, M. Minkov, Y. Shi, and S. Fan, artificial neural networks can be trained directly on an optical chip. The research, titled "Training of photonic neural networks through in situ backpropagation and gradient measurement", demonstrates that an optical circuit has all the capabilities needed to perform the critical functions of an electronics-based artificial neural network. This could make performing complex tasks like speech or image recognition less expensive, faster, and more energy efficient.

According to research team leader Shanhui Fan of Stanford University, "Using an optical chip to perform neural network computations more efficiently than is possible with digital computers could allow more complex problems to be solved."

Until now, the training step for optical ANNs was performed using a traditional digital computer, and the final settings were then imported into the optical circuit. But, according to Optica, The Optical Society's journal for high-impact research, there is a more direct method for training these networks. This involves making use of an optical analog of the backpropagation algorithm. Tyler W. Hughes, the first author of the research paper, states that "using a physical device rather than a computer model for training makes the process more accurate." He also mentions that "because the training step is a very computationally expensive part of the implementation of the neural network, performing this step optically is key to improving the computational efficiency, speed and power consumption of artificial networks."

Neural network processing is usually performed with the help of a traditional computer. But now, for neural network computing, researchers are interested in optics-based devices, as computations performed on these devices use much less energy than on electronic devices. The researchers designed an optical chip that imitates the way conventional computers train neural networks, which provides a way of implementing an all-optical neural network.

According to Hughes, the ANN is like a black box with a number of knobs. During the training stage, each knob is turned ever so slightly so the system can be tested to see how the algorithm's performance changes. He says, "Our method not only helps predict which direction to turn the knobs but also how much you should turn each knob to get you closer to the desired performance."

How does the new training protocol work?

This new training method uses optical circuits with tunable beam splitters, which are adjusted by altering the settings of optical phase shifters. First, you feed a laser encoded with the information to be processed through the optical circuit. Once the laser exits the device, the difference against the expected outcome is calculated. The collected information is then used to generate a new light signal, which is sent back through the optical network in the opposite direction. The researchers showed that neural network performance changes with respect to each beam splitter's setting, so the phase shifter settings can be adjusted based on this information. The whole process is repeated until the neural network produces the desired outcome.

The researchers have further tested this training technique using optical simulations. In these tests, the optical implementation performed similarly to a conventional computer.
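To make the "turning knobs" analogy a little more concrete, here is a toy numerical sketch of that adjust-and-measure loop, written in Python with NumPy. It is only an analogy of the general idea (nudge each tunable parameter, measure how the error changes, then update the parameters accordingly); it does not model photonic circuits or the in situ backpropagation method from the paper, and all names and values in it are made up for illustration:

    import numpy as np

    # Toy analogy of the "turn each knob slightly and measure" training loop.
    # 'knobs' stands in for tunable settings (like phase shifters); 'circuit' is
    # just an arbitrary function of the input, not a model of a photonic chip.

    def circuit(x, knobs):
        return np.sin(knobs[0] * x) + knobs[1] * x

    def loss(knobs, x, target):
        return np.mean((circuit(x, knobs) - target) ** 2)

    x = np.linspace(0.0, 1.0, 50)
    target = np.sin(2.0 * x) + 0.5 * x   # desired behaviour of the "circuit"
    knobs = np.array([1.0, 0.0])         # initial knob settings
    lr, eps = 0.1, 1e-4

    for step in range(500):
        grad = np.zeros_like(knobs)
        base = loss(knobs, x, target)
        for i in range(len(knobs)):
            nudged = knobs.copy()
            nudged[i] += eps             # turn one knob slightly
            grad[i] = (loss(nudged, x, target) - base) / eps
        knobs -= lr * grad               # move each knob against the error

    print("final knobs:", knobs)
    print("final loss:", loss(knobs, x, target))  # much smaller than at the start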
The researchers are planning to further optimize the system in order to come out with a practical application using a neural network.

How Deep Neural Networks can improve Speech Recognition and generation
Recurrent neural networks and the LSTM architecture

Create an RNN based Python machine translation system [Tutorial]

Sunith Shetty
20 Jul 2018
22 min read
Machine translation is a process which uses neural network techniques to automatically translate text from one language to another, with no human intervention required. In today's machine learning tutorial, we will understand the architecture and learn how to train and build your own machine translation system. This project will help us automatically translate German to produce English sentences. This article is an excerpt from a book written by Luca Massaron, Alberto Boschetti, Alexey Grigorev, Abhishek Thakur, and Rajalingappaa Shanmugamani titled TensorFlow Deep Learning Projects.

Walkthrough of the architecture

A machine translation system receives as input an arbitrary string in one language and produces, as output, a string with the same meaning but in another language. Google Translate is one example (many other major IT companies have their own). There, users are able to translate to and from more than 100 languages. Using the webpage is easy: on the left, just put the sentence you want to translate (for example, Hello World), select its language (in the example, it's English), and select the language you want it to be translated to. As an example, consider translating the sentence Hello World to French.

Is it easy? At a glance, we may think it's a simple dictionary substitution: words are chunked, the translation is looked up in the specific English-to-French dictionary, and each word is substituted with its translation. Unfortunately, that's not the case. In the example, the English sentence has two words, while the French one has three. More generally, think about phrasal verbs (turn up, turn off, turn on, turn down), the Saxon genitive, grammatical gender, tenses, conditional sentences... they don't always have a direct translation, and the correct one should follow the context of the sentence.

That's why, for doing machine translation, we need some artificial intelligence tools. Specifically, as for many other natural language processing (NLP) tasks, we'll be using recurrent neural networks (RNNs). Their main feature is that they work on sequences: given an input sequence, they produce an output sequence. The objective of this article is to create the correct training pipeline for having a sentence as the input sequence, and its translation as the output one. Remember also the no free lunch theorem: this process isn't easy, and more solutions can be created with the same result. Here, in this article, we will propose a simple but powerful one.

First of all, we start with the corpora: it's maybe the hardest thing to find, since it should contain high-fidelity translations of many sentences from one language to another. Fortunately, NLTK, a well-known Python package for NLP, contains the corpus Comtrans. Comtrans is the acronym of combination approach to machine translation, and it contains an aligned corpus for three languages: German, French, and English. In this project, we will use these corpora for a few reasons, as follows:

- It's easy to download and import in Python.
- No preprocessing is needed to read it from disk or from the internet. NLTK already handles that part.
- It's small enough to be used on many laptops (a few tens of thousands of sentences).
- It's freely available on the internet.

For more information about the Comtrans project, go to http://www.fask.uni-mainz.de/user/rapp/comtrans/. More specifically, we will try to create a machine translation system to translate German to English.
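One practical note before running the commands in the next steps: the Comtrans corpus is not bundled with a fresh NLTK installation and usually has to be downloaded once. A minimal way to do that (assuming a standard NLTK setup with internet access) is:

    import nltk

    # One-off download of the aligned Comtrans corpus used in this tutorial.
    nltk.download('comtrans')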
We picked these two languages at random among the ones available in the Comtrans corpora: feel free to flip them, or use the French corpora instead. The pipeline of our project is generic enough to handle any combination. Let's now investigate how the corpora is organized by typing some commands: from nltk.corpus import comtrans print(comtrans.aligned_sents('alignment-de-en.txt')[0]) The output is as follows: <AlignedSent: 'Wiederaufnahme der S...' -> 'Resumption of the se...'> The pairs of sentences are available using the function aligned_sents. The filename contains the from and to language. In this case, as for the following part of the project, we will translate German (de) to English (en). The returned object is an instance of the class nltk.translate.api.AlignedSent. By looking at the documentation, the first language is accessible with the attribute words, while the second language is accessible with the attribute mots. So, to extract the German sentence and its English translation separately, we should run: print(comtrans.aligned_sents()[0].words) print(comtrans.aligned_sents()[0].mots) The preceding code outputs: ['Wiederaufnahme', 'der', 'Sitzungsperiode'] ['Resumption', 'of', 'the', 'session'] How nice! The sentences are already tokenized, and they look as sequences. In fact, they will be the input and (hopefully) the output of the RNN which will provide the service of machine translation from German to English for our project. Furthermore, if you want to understand the dynamics of the language, Comtrans makes available the alignment of the words in the translation: print(comtrans.aligned_sents()[0].alignment) The preceding code outputs: 0-0 1-1 1-2 2-3 The first word in German is translated to the first word in English (Wiederaufnahme to Resumption), the second to the second (der to both of and the), and the third (at index 1) is translated with the fourth (Sitzungsperiode to session). Pre-processing of the corpora The first step is to retrieve the corpora. We've already seen how to do this, but let's now formalize it in a function. To make it generic enough, let's enclose these functions in a file named corpora_tools.py. Let's do some imports that we will use later on: import pickle import re from collections import Counter from nltk.corpus import comtrans Now, let's create the function to retrieve the corpora: def retrieve_corpora(translated_sentences_l1_l2='alignment-de-en.txt'): print("Retrieving corpora: {}".format(translated_sentences_l1_l2)) als = comtrans.aligned_sents(translated_sentences_l1_l2) sentences_l1 = [sent.words for sent in als] sentences_l2 = [sent.mots for sent in als] return sentences_l1, sentences_l2 This function has one argument; the file containing the aligned sentences from the NLTK Comtrans corpora. It returns two lists of sentences (actually, they're a list of tokens), one for the source language (in our case, German), the other in the destination language (in our case, English). On a separate Python REPL, we can test this function: sen_l1, sen_l2 = retrieve_corpora() print("# A sentence in the two languages DE & EN") print("DE:", sen_l1[0]) print("EN:", sen_l2[0]) print("# Corpora length (i.e. number of sentences)") print(len(sen_l1)) assert len(sen_l1) == len(sen_l2) The preceding code creates the following output: Retrieving corpora: alignment-de-en.txt # A sentence in the two languages DE & EN DE: ['Wiederaufnahme', 'der', 'Sitzungsperiode'] EN: ['Resumption', 'of', 'the', 'session'] # Corpora length (i.e. 
number of sentences) 33334 We also printed the number of sentences in each corpus (about 33,000) and asserted that the number of sentences in the source and the destination languages is the same. In the following step, we want to clean up the tokens. Specifically, we want to tokenize punctuation and lowercase the tokens. To do so, we can create a new function in corpora_tools.py. We will use the regex module to perform the further splitting tokenization: def clean_sentence(sentence): regex_splitter = re.compile("([!?.,:;$\"')( ])") clean_words = [re.split(regex_splitter, word.lower()) for word in sentence] return [w for words in clean_words for w in words if words if w] Again, in the REPL, let's test the function: clean_sen_l1 = [clean_sentence(s) for s in sen_l1] clean_sen_l2 = [clean_sentence(s) for s in sen_l2] print("# Same sentence as before, but chunked and cleaned") print("DE:", clean_sen_l1[0]) print("EN:", clean_sen_l2[0]) The preceding code outputs the same sentence as before, but chunked and cleaned: DE: ['wiederaufnahme', 'der', 'sitzungsperiode'] EN: ['resumption', 'of', 'the', 'session'] Nice! The next step for this project is filtering the sentences that are too long to be processed. Since our goal is to perform the processing on a local machine, we should limit ourselves to sentences up to N tokens. In this case, we set N=20, in order to be able to train the learner within 24 hours. If you have a powerful machine, feel free to increase that limit. To make the function generic enough, there's also a lower bound, with a default value set to 0, meaning even an empty token set is accepted. The logic of the function is very simple: if the number of tokens for a sentence or its translation is greater than N, then the sentence (in both languages) is removed: def filter_sentence_length(sentences_l1, sentences_l2, min_len=0, max_len=20): filtered_sentences_l1 = [] filtered_sentences_l2 = [] for i in range(len(sentences_l1)): if min_len <= len(sentences_l1[i]) <= max_len and min_len <= len(sentences_l2[i]) <= max_len: filtered_sentences_l1.append(sentences_l1[i]) filtered_sentences_l2.append(sentences_l2[i]) return filtered_sentences_l1, filtered_sentences_l2 Again, let's see in the REPL how many sentences survived this filter. Remember, we started with more than 33,000: filt_clean_sen_l1, filt_clean_sen_l2 = filter_sentence_length(clean_sen_l1, clean_sen_l2) print("# Filtered Corpora length (i.e. number of sentences)") print(len(filt_clean_sen_l1)) assert len(filt_clean_sen_l1) == len(filt_clean_sen_l2) The preceding code prints the following output: # Filtered Corpora length (i.e. number of sentences) 14788 Almost 15,000 sentences survived, that is, about half of the corpora. Now, we finally move from text to numbers (which is what the learning algorithm actually works with). To do so, we shall create a dictionary of the words for each language. The dictionary should be big enough to contain most of the words, though we can discard some words if they have a low occurrence. This is a common practice even with tf-idf (term frequency within a document, multiplied by the inverse of the document frequency, i.e. in how many documents that token appears), where very rare words are discarded to speed up the computation and make the solution more scalable and generic.
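Before fixing the dictionary sizes in the next step, it can help to look at the actual token frequency distribution. The following is a small exploratory sketch (not part of corpora_tools.py) that reuses the clean_sen_l1 list from the REPL session above:

```python
from collections import Counter

# Count how often each German token appears in the cleaned corpus
token_counts = Counter(token for sentence in clean_sen_l1 for token in sentence)

print("Unique tokens:", len(token_counts))
print("Ten most frequent:", token_counts.most_common(10))

# Tokens seen only once are the first candidates to fall outside a capped
# dictionary and be mapped to the unknown symbol
singletons = sum(1 for count in token_counts.values() if count == 1)
print("Tokens seen exactly once:", singletons)
```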
We need here four special symbols in both dictionaries: One symbol for padding (we'll see later why we need it) One symbol for dividing the two sentences One symbol to indicate where the sentence stops One symbol to indicate unknown words (like the very rare ones) For doing so, let's create a new file named data_utils.py containing the following lines of code: _PAD = "_PAD" _GO = "_GO" _EOS = "_EOS" _UNK = "_UNK" _START_VOCAB = [_PAD, _GO, _EOS, _UNK] PAD_ID = 0 GO_ID = 1 EOS_ID = 2 UNK_ID = 3 OP_DICT_IDS = [PAD_ID, GO_ID, EOS_ID, UNK_ID] Then, back to the corpora_tools.py file, let's add the following function: import data_utils def create_indexed_dictionary(sentences, dict_size=10000, storage_path=None): count_words = Counter() dict_words = {} opt_dict_size = len(data_utils.OP_DICT_IDS) for sen in sentences: for word in sen: count_words[word] += 1 dict_words[data_utils._PAD] = data_utils.PAD_ID dict_words[data_utils._GO] = data_utils.GO_ID dict_words[data_utils._EOS] = data_utils.EOS_ID dict_words[data_utils._UNK] = data_utils.UNK_ID for idx, item in enumerate(count_words.most_common(dict_size)): dict_words[item[0]] = idx + opt_dict_size if storage_path: pickle.dump(dict_words, open(storage_path, "wb")) return dict_words This function takes as arguments the number of entries in the dictionary and the path of where to store the dictionary. Remember, the dictionary is created while training the algorithms: during the testing phase it's loaded, and the association token/symbol should be the same one as used in the training. If the number of unique tokens is greater than the value set, only the most popular ones are selected. At the end, the dictionary contains the association between a token and its ID for each language. After building the dictionary, we should look up the tokens and substitute them with their token ID. For that, we need another function: def sentences_to_indexes(sentences, indexed_dictionary): indexed_sentences = [] not_found_counter = 0 for sent in sentences: idx_sent = [] for word in sent: try: idx_sent.append(indexed_dictionary[word]) except KeyError: idx_sent.append(data_utils.UNK_ID) not_found_counter += 1 indexed_sentences.append(idx_sent) print('[sentences_to_indexes] Did not find {} words'.format(not_found_counter)) return indexed_sentences This step is very simple; the token is substituted with its ID. If the token is not in the dictionary, the ID of the unknown token is used. Let's see in the REPL how our sentences look after these steps: dict_l1 = create_indexed_dictionary(filt_clean_sen_l1, dict_size=15000, storage_path="/tmp/l1_dict.p") dict_l2 = create_indexed_dictionary(filt_clean_sen_l2, dict_size=10000, storage_path="/tmp/l2_dict.p") idx_sentences_l1 = sentences_to_indexes(filt_clean_sen_l1, dict_l1) idx_sentences_l2 = sentences_to_indexes(filt_clean_sen_l2, dict_l2) print("# Same sentences as before, with their dictionary ID") print("DE:", list(zip(filt_clean_sen_l1[0], idx_sentences_l1[0]))) This code prints the token and its ID for both the sentences. What's used in the RNN will be just the second element of each tuple, that is, the integer ID: # Same sentences as before, with their dictionary ID DE: [('wiederaufnahme', 1616), ('der', 7), ('sitzungsperiode', 618)] EN: [('resumption', 1779), ('of', 8), ('the', 5), ('session', 549)] Please also note how frequent tokens, such as the and of in English, and der in German, have a low ID. That's because the IDs are sorted by popularity (see the body of the function create_indexed_dictionary). 
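It is also convenient to keep the inverse mapping around, from IDs back to tokens. It is not required by the training pipeline shown here, but it makes it easy to inspect indexed sentences (and, later, the translator's output). A minimal sketch, reusing the dict_l1/dict_l2 and idx_sentences_l1/idx_sentences_l2 variables from the REPL session above:

```python
# Reverse the token -> ID mapping into ID -> token
inv_dict_l1 = {idx: token for token, idx in dict_l1.items()}
inv_dict_l2 = {idx: token for token, idx in dict_l2.items()}

def decode_indexes(indexed_sentence, inverted_dictionary):
    """Map a list of token IDs back to their tokens, falling back to _UNK."""
    return [inverted_dictionary.get(idx, '_UNK') for idx in indexed_sentence]

print(decode_indexes(idx_sentences_l1[0], inv_dict_l1))  # ['wiederaufnahme', 'der', 'sitzungsperiode']
print(decode_indexes(idx_sentences_l2[0], inv_dict_l2))  # ['resumption', 'of', 'the', 'session']
```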
Even though we did the filtering to limit the maximum size of the sentences, we should create a function to extract the maximum size. For the lucky owners of very powerful machines, which didn't do any filtering, that's the moment to see how long the longest sentence in the RNN will be. That's simply the function: def extract_max_length(corpora): return max([len(sentence) for sentence in corpora]) Let's apply the following to our sentences: max_length_l1 = extract_max_length(idx_sentences_l1) max_length_l2 = extract_max_length(idx_sentences_l2) print("# Max sentence sizes:") print("DE:", max_length_l1) print("EN:", max_length_l2) As expected, the output is: # Max sentence sizes: DE: 20 EN: 20 The final preprocessing step is padding. We need all the sequences to be the same length, therefore we should pad the shorter ones. Also, we need to insert the correct tokens to instruct the RNN where the string begins and ends. Basically, this step should: Pad the input sequences, for all being 20 symbols long Pad the output sequence, to be 20 symbols long Insert an _GO at the beginning of the output sequence and an _EOS at the end to position the start and the end of the translation This is done by this function (insert it in the corpora_tools.py): def prepare_sentences(sentences_l1, sentences_l2, len_l1, len_l2): assert len(sentences_l1) == len(sentences_l2) data_set = [] for i in range(len(sentences_l1)): padding_l1 = len_l1 - len(sentences_l1[i]) pad_sentence_l1 = ([data_utils.PAD_ID]*padding_l1) + sentences_l1[i] padding_l2 = len_l2 - len(sentences_l2[i]) pad_sentence_l2 = [data_utils.GO_ID] + sentences_l2[i] + [data_utils.EOS_ID] + ([data_utils.PAD_ID] * padding_l2) data_set.append([pad_sentence_l1, pad_sentence_l2]) return data_set To test it, let's prepare the dataset and print the first sentence: data_set = prepare_sentences(idx_sentences_l1, idx_sentences_l2, max_length_l1, max_length_l2) print("# Prepared minibatch with paddings and extra stuff") print("DE:", data_set[0][0]) print("EN:", data_set[0][1]) print("# The sentence pass from X to Y tokens") print("DE:", len(idx_sentences_l1[0]), "->", len(data_set[0][0])) print("EN:", len(idx_sentences_l2[0]), "->", len(data_set[0][1])) The preceding code outputs the following: # Prepared minibatch with paddings and extra stuff DE: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1616, 7, 618] EN: [1, 1779, 8, 5, 549, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] # The sentence pass from X to Y tokens DE: 3 -> 20 EN: 4 -> 22 As you can see, the input and the output are padded with zeros to have a constant length (in the dictionary, they correspond to _PAD, see data_utils.py), and the output contains the markers 1 and 2 just before the start and the end of the sentence. As proven effective in the literature, we're going to pad the input sentences at the start and the output sentences at the end. After this operation, all the input sentences are 20 items long, and the output sentences 22. Training the machine translator So far, we've seen the steps to preprocess the corpora, but not the model used. The model is actually already available on the TensorFlow Models repository, freely downloadable from https://github.com/tensorflow/models/blob/master/tutorials/rnn/translate/seq2seq_model.py. The piece of code is licensed with Apache 2.0. We really thank the authors for having open sourced such a great model. Copyright 2015 The TensorFlow Authors. All Rights Reserved. 
Licensed under the Apache License, Version 2.0 (the License); you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an AS IS BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. We will see the usage of the model throughout this section. First, let's create a new file named train_translator.py and put in some imports and some constants. We will save the dictionary in the /tmp/ directory, as well as the model and its checkpoints: import time import math import sys import pickle import glob import os import tensorflow as tf from seq2seq_model import Seq2SeqModel from corpora_tools import * path_l1_dict = "/tmp/l1_dict.p" path_l2_dict = "/tmp/l2_dict.p" model_dir = "/tmp/translate" model_checkpoints = model_dir + "/translate.ckpt" Now, let's use all the tools created in the previous section within a function that, given a Boolean flag, returns the corpora. More specifically, if the argument is False, it builds the dictionary from scratch (and saves it); otherwise, it uses the dictionary available in the path: def build_dataset(use_stored_dictionary=False): sen_l1, sen_l2 = retrieve_corpora() clean_sen_l1 = [clean_sentence(s) for s in sen_l1] clean_sen_l2 = [clean_sentence(s) for s in sen_l2] filt_clean_sen_l1, filt_clean_sen_l2 = filter_sentence_length(clean_sen_l1, clean_sen_l2) if not use_stored_dictionary: dict_l1 = create_indexed_dictionary(filt_clean_sen_l1, dict_size=15000, storage_path=path_l1_dict) dict_l2 = create_indexed_dictionary(filt_clean_sen_l2, dict_size=10000, storage_path=path_l2_dict) else: dict_l1 = pickle.load(open(path_l1_dict, "rb")) dict_l2 = pickle.load(open(path_l2_dict, "rb")) dict_l1_length = len(dict_l1) dict_l2_length = len(dict_l2) idx_sentences_l1 = sentences_to_indexes(filt_clean_sen_l1, dict_l1) idx_sentences_l2 = sentences_to_indexes(filt_clean_sen_l2, dict_l2) max_length_l1 = extract_max_length(idx_sentences_l1) max_length_l2 = extract_max_length(idx_sentences_l2) data_set = prepare_sentences(idx_sentences_l1, idx_sentences_l2, max_length_l1, max_length_l2) return (filt_clean_sen_l1, filt_clean_sen_l2), data_set, (max_length_l1, max_length_l2), (dict_l1_length, dict_l2_length) This function returns the cleaned sentences, the dataset, the maximum length of the sentences, and the lengths of the dictionaries. Also, we need a function to clean up the model directory. Every time we run the training routine from scratch, we need to clean up the model directory, so that we don't pick up stale checkpoints left over from previous runs.
We can do this with a very simple function: def cleanup_checkpoints(model_dir, model_checkpoints): for f in glob.glob(model_checkpoints + "*"): os.remove(f) try: os.mkdir(model_dir) except FileExistsError: pass Finally, let's create the model in a reusable fashion: def get_seq2seq_model(session, forward_only, dict_lengths, max_sentence_lengths, model_dir): model = Seq2SeqModel( source_vocab_size=dict_lengths[0], target_vocab_size=dict_lengths[1], buckets=[max_sentence_lengths], size=256, num_layers=2, max_gradient_norm=5.0, batch_size=64, learning_rate=0.5, learning_rate_decay_factor=0.99, forward_only=forward_only, dtype=tf.float16) ckpt = tf.train.get_checkpoint_state(model_dir) if ckpt and tf.train.checkpoint_exists(ckpt.model_checkpoint_path): print("Reading model parameters from {}".format(ckpt.model_checkpoint_path)) model.saver.restore(session, ckpt.model_checkpoint_path) else: print("Created model with fresh parameters.") session.run(tf.global_variables_initializer()) return model This function calls the constructor of the model, passing the following parameters: The source vocabulary size (German, in our example) The target vocabulary size (English, in our example) The buckets (in our example there is just one, since we padded all the sequences to a single size) The long short-term memory (LSTM) internal units size The number of stacked LSTM layers The maximum norm of the gradient (for gradient clipping) The mini-batch size (that is, how many observations for each training step) The learning rate The learning rate decay factor The direction of the model The type of data (in our example, we will use float16, that is, a float using 2 bytes) To make the training faster and obtain a model with good performance, we have already set the values in the code; feel free to change them and see how it performs. The final if/else in the function retrieves the model from its checkpoint, if the model already exists. In fact, this function will be used by the decoder too, to retrieve the model and run it on the test set. Finally, we have reached the function to train the machine translator. Here it is: def train(): with tf.Session() as sess: model = get_seq2seq_model(sess, False, dict_lengths, max_sentence_lengths, model_dir) # This is the training loop. step_time, loss = 0.0, 0.0 current_step = 0 bucket = 0 steps_per_checkpoint = 100 max_steps = 20000 while current_step < max_steps: start_time = time.time() encoder_inputs, decoder_inputs, target_weights = model.get_batch([data_set], bucket) _, step_loss, _ = model.step(sess, encoder_inputs, decoder_inputs, target_weights, bucket, False) step_time += (time.time() - start_time) / steps_per_checkpoint loss += step_loss / steps_per_checkpoint current_step += 1 if current_step % steps_per_checkpoint == 0: perplexity = math.exp(float(loss)) if loss < 300 else float("inf") print ("global step {} learning rate {} step-time {} perplexity {}".format( model.global_step.eval(), model.learning_rate.eval(), step_time, perplexity)) sess.run(model.learning_rate_decay_op) model.saver.save(sess, model_checkpoints, global_step=model.global_step) step_time, loss = 0.0, 0.0 encoder_inputs, decoder_inputs, target_weights = model.get_batch([data_set], bucket) _, eval_loss, _ = model.step(sess, encoder_inputs, decoder_inputs, target_weights, bucket, True) eval_ppx = math.exp(float(eval_loss)) if eval_loss < 300 else float("inf") print(" eval: perplexity {}".format(eval_ppx)) sys.stdout.flush() The function starts by creating the model.
Also, it sets some constants on the steps per checkpoints and the maximum number of steps. Specifically, in the code, we will save a model every 100 steps and we will perform no more than 20,000 steps. If it still takes too long, feel free to kill the program: every checkpoint contains a trained model, and the decoder will use the most updated one. At this point, we enter the while loop. For each step, we ask the model to get a minibatch of data (of size 64, as set previously). The method get_batch returns the inputs (that is, the source sequence), the outputs (that is, the destination sequence), and the weights of the model. With the method step, we run one step of the training. One piece of information returned is the loss for the current minibatch of data. That's all the training! To report the performance and store the model every 100 steps, we print the average perplexity of the model (the lower, the better) on the 100 previous steps, and we save the checkpoint. The perplexity is a metric connected to the uncertainty of the predictions: the more confident we're about the tokens, the lower will be the perplexity of the output sentence. Also, we reset the counters and we extract the same metric from a single minibatch of the test set (in this case, it's a random minibatch of the dataset), and performances of it are printed too. Then, the training process restarts again. As an improvement, every 100 steps we also reduce the learning rate by a factor. In this case, we multiply it by 0.99. This helps the convergence and the stability of the training. We now have to connect all the functions together. In order to create a script that can be called by the command line but is also used by other scripts to import functions, we can create a main, as follows: if __name__ == "__main__": _, data_set, max_sentence_lengths, dict_lengths = build_dataset(False) cleanup_checkpoints(model_dir, model_checkpoints) train() In the console, you can now train your machine translator system with a very simple command: $> python train_translator.py On an average laptop, without an NVIDIA GPU, it takes more than a day to reach a perplexity below 10 (12+ hours). This is the output: Retrieving corpora: alignment-de-en.txt [sentences_to_indexes] Did not find 1097 words [sentences_to_indexes] Did not find 0 words Created model with fresh parameters. global step 100 learning rate 0.5 step-time 4.3573073434829713 perplexity 526.6638556683066 eval: perplexity 159.2240770935855 [...] global step 10500 learning rate 0.180419921875 step-time 4.35106209993362414 perplexity 2.0458043055629487 eval: perplexity 1.8646006006241982 [...] In this article, we've seen how to create a machine translation system based on an RNN. We've seen how to organize the corpus, and how to train it. To know more about how to test and translate the model, do checkout this book TensorFlow Deep Learning Projects. Google’s translation tool is now offline – and more powerful than ever thanks to AI Anatomy of an automated machine learning algorithm (AutoML) FAE (Fast Adaptation Engine): iOlite’s tool to write Smart Contracts using machine translation

Why Guido van Rossum quit as the Python chief (BDFL)

Amey Varangaonkar
20 Jul 2018
7 min read
It was the proverbial ‘end of an era’ for Python as Guido van Rossum stepped down as the Python chief, almost 3 decades since he created the programming language. It came as a shock to many Python users, and left a few bewildered. Many core developers thought this day might come, but they didn’t expect it to come so soon. However, looking at the post that Guido shared with the community, does this decision really come as a surprise? In this article, we dive deep into the possibilities and the circumstances that could’ve played a major role in van Rossum’s resignation. *Disclaimer: The views presented in this article are based purely on our research. They are not to be considered as inputs directly received from the Python community or Guido van Rossum himself. What can we make of Guido’s post? I’m pretty sure you’ve already read the mailing list post that Guido shared with the community last week. Aptly titled as ‘Transfer of Power’, the mail straightaway begins on a negative note: “Now that PEP 572 is done, I don't ever want to have to fight so hard for a PEP and find that so many people despise my decisions.” Some way to start a mail. The anger, disappointment and the tiredness is quite evident. Guido goes on to state that he would be removing himself from all the decision-making processes and will be available only for a while as a core developer and a mentor. From the tone of the mail, the three main reasons for his departure can be figured out quite easily: Guido felt there were questions around his decision-making and overall administration capabilities. The backlash on the PEP 572 is a testament to this. van Rossum is 62 now. Maybe the stress of leading this project for close to 30 years has finally taken a toll on his health, as he wryly talked about the piling medical issues. This is also quite evident from the last sentence of his mail: “I'm tired, and need a very long break” Guido thinks this is the right time for the baton to be passed over to the other core committers. He leaves everything for the core developers to figure out - from finalizing the PEPs (Python Enhancement Proposal) to deciding how the new core developers are inducted. Understanding the backlash behind PEP 572 For a mature language such as Python, you’d think there wouldn’t be much left to get excited about. However, a proposal to add a new feature to Python - PEP 572 - has caused a furore in the Python community in the last few months. What PEP 572 is all about The idea behind PEP 572 is quite simple - to allow assignment to variables within expressions. To make things simpler, consider the following lines of code in Python: a = b  - this is a simple assignment statement, while: a == b - this is a test for equality With PEP 572 comes a brand new operator := which is available in some other programming languages, and is an equivalent of the in-expression. So the way you would use this operator would be: while a:=b.read(10): print(a) Looks like a simple statement, isn’t it? Keep printing a while it is in a certain range of b. So what’s all the hue and cry about? In principle, the way := is used signifies that the value of an expression is assigned and returned to whatever code is using it, almost as if no assignment ever happened. This can get really tricky when complex expressions are involved. Ideally, an expression assignment is useful when one needs to retain the result of that expression while it is being used for some other purposes. 
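To make the discussion concrete, here is a small, self-contained sketch of the pattern PEP 572 targets (this is an illustration, not code from the PEP or from Guido's mail; the operator eventually shipped in Python 3.8, so the second half only runs on 3.8 or later):

```python
import io

stream = io.BytesIO(b"a small buffer to read in fixed-size chunks")

# Pre-PEP 572 pattern: the read call has to appear twice
chunks_old = []
chunk = stream.read(10)
while chunk:
    chunks_old.append(chunk)
    chunk = stream.read(10)

stream.seek(0)

# PEP 572 pattern: assign and test in a single expression
chunks_new = []
while chunk := stream.read(10):
    chunks_new.append(chunk)

assert chunks_old == chunks_new
```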
The use of := is against this best practice, and has therefore led to many disagreements. The community response to PEP 572 Many Python users thought PEP 572 was a bad idea due to the reasons mentioned above. They did not hide their feelings regarding this too. In fact, some of the comments were quite brutal: Even some of the core developers were unhappy with this proposal, saying it did not fit the fundamental Python best practice, i.e. preference for simplicity over complexity. This practice is a part of the PEP 20, titled ‘The Zen of the Python’. As the Python BDFL, van Rossum personally signed off each PEP. This is in stark contrast to how other programming languages such as PHP finalize their proposals, i.e., by voting on them. On the PEP 572 objections, Guido’s response befitted that of a BDFL perfectly: Some developers still disagreed with this proposal, believing that it deviated from the standard best practices and rather reflected van Rossum’s preferred style of coding. So much so that van Rossum had to ask the committers to give him time to respond to the queries. Eventually the PEP 572 was accepted by Guido van Rossum, as he settled the matter with the following note: Thank you all. I will accept the PEP as is. I am happy to accept *clarification* updates to the PEP if people care to submit them as PRs to the peps repo, and that could even (to some extent) include summaries of discussion we've had, or outright rejected ideas. But even without any of those I think the PEP is very clear so I will not wait very long (maybe a week). Normally, in case of some other language, such an argument could have gone on forever, with both the sides reluctant to give in. The progress of the language would be stuck in a limbo as a result of this polarity. With Guido gone now, one cannot help but wonder if this is going to be case with Python going forward. Could van Rossum been pressurized less if he had adopted a consensus-based voting system to sign proposals off too? And if that was the case, would the proposal still have gone through an opposing majority of core developers? “Tired of the hatred” It would be wrong to say that the BDFL quit mainly because of how working on PEP 572 left a bitter taste in his mouth. However, it is fair to say that the negativity surrounding PEP 572 must’ve pushed van Rossum off the ledge finally. The fact that he thinks stepping down from his role as Python chief would mean people would not ‘despise his decisions’ - must’ve played a major role in his announcement. Guido’s decision to quit was rather an inevitable outcome of a series of past bad experiences accrued over the years with backlashes over his decisions on Python’s direction. Leading one of the most successful and long running open source projects in the world is no joke, and it brings more than its fair share of burden to carry. In many ways, CEOs of big tech companies have it easier. For starters, they’ve a lot of funding and they mainly worry about how to make their shareholders happy (make more money). More importantly, they aren’t directly exposed to the end users the way open source leaders are, for every decision they make. What’s next for Guido? Guido van Rossum isn’t going away for good. His mail states that he will still be around as a core dev, and as a mentor to other budding developers for some time. He says just wants to move away from the leadership role, away from all the responsibilities that once made him the BDFL. 
His tweet corroborates this: https://twitter.com/gvanrossum/status/1017546023227424768?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet Call him a dictator if you will, his contributions to Python cannot be taken away. From being a beginner’s coding language to being used in enterprise applications - Python’s rise under Van Rossum as one of the most popular and versatile programming languages in the world has been incredible. Perhaps the time was right for the sun to set, and the PEP 572 scenario and the circumstances surrounding it might just have given Guido the platform to ride away into the sunset. Read more Python founder resigns – Guido van Rossum goes ‘on a permanent vacation from being BDFL’ Top 7 Python programming books you need to read Python, Tensorflow, Excel and more – Data professionals reveal their top tools

HTML5 and the rise of modern JavaScript browser APIs [Tutorial]

Pavan Ramchandani
20 Jul 2018
15 min read
The first working draft of HTML5 arrived in 2008. HTML5, however, was so technologically advanced at the time that it was predicted it would not be ready till at least 2022! However, that turned out to be incorrect, and here we are, with fully supported HTML5 and ES6/ES7/ES8-supported browsers. A lot of APIs used by HTML5 go hand in hand with JavaScript. Before looking at those APIs, let us understand a little about how JavaScript sees the web. This'll eventually put us in a strong position to understand various interesting, JavaScript-related things such as the Web Workers API, etc. In this article, we will introduce you to the most popular web languages, HTML and JavaScript, and how they came together to become the default platform for building modern front-end web applications. This is an excerpt from the book, Learn ECMAScript - Second Edition, written by Mehul Mohan and Narayan Prusty. The HTML DOM The HTML DOM is a tree version of how the document looks. Here is a very simple example of an HTML document: <!doctype HTML> <html> <head> <title>Cool Stuff!</title> </head> <body> <p>Awesome!</p> </body> </html> Its tree version is just a rough representation of the document's hierarchy: the <html> tag consists of head and body; furthermore, the <body> tag consists of a <p> tag, whereas the <head> tag consists of the <title> tag. Simple! JavaScript has access to the DOM directly, and can modify the connections between these nodes, add nodes, remove nodes, change contents, attach event listeners, and so on. What is the Document Object Model (DOM)? Simply put, the DOM is a way to represent HTML or XML documents as nodes. This makes it easier for other programming languages to connect to a DOM-following page and modify it accordingly. To be clear, DOM is not a programming language. DOM provides JavaScript with a way to interact with web pages. You can think of it as a standard. Every element is part of the DOM tree, which can be accessed and modified with APIs exposed to JavaScript. DOM is not restricted to being accessed only by JavaScript. It is language-independent and there are several modules available in various languages to parse DOM (just like JavaScript), including PHP, Python, Java, and so on. As said previously, DOM provides JavaScript with a way to interact with it. How? Well, accessing the DOM is as easy as accessing a predefined object in JavaScript: document. The DOM API specifies what you'll find inside the document object. The document object essentially gives JavaScript access to the DOM tree formed by your HTML document. If you notice, you cannot access any element at all without actually accessing the document object first. DOM methods/properties All HTML elements are objects in JavaScript. The most commonly used object is the document object. It has the whole DOM tree attached to it. You can query for elements on that. Let's look at some very common examples of these methods: getElementById method getElementsByTagName method getElementsByClassName method querySelector method querySelectorAll method By no means is this an exhaustive list of all methods available. However, this list should at least get you started with DOM manipulation. Use MDN as your reference for various other methods. Here's the link: https://developer.mozilla.org/en-US/docs/Web/API/Document#Methods.
Although some APIs were released with HTML5 itself (such as the Canvas API), some were added later (such as the Fetch API). Let's see some of these APIs and how to use them with some code examples. Page Visibility API - is the user still on the page? The Page Visibility API allows developers to run specific code whenever the page the user is on goes into focus or out of focus. Imagine you run a game-hosting site and want to pause the game whenever the user loses focus on your tab. This is the way to go! function pageChanged() { if (document.hidden) { console.log('User is on some other tab/out of focus') // line #1 } else { console.log('Hurray! User returned') // line #2 } } document.addEventListener("visibilitychange", pageChanged); We're adding an event listener to the document; it fires whenever the page's visibility changes. Sure, the pageChanged function gets an event object as well in the argument, but we can simply use the document.hidden property, which returns a Boolean value depending on the page's visibility at the time the code was called. You'll add your pause game code at line #1 and your resume game code at line #2. navigator.onLine API – the user's network status The navigator.onLine API tells you if the user is online or not. Imagine building a multiplayer game and you want the game to automatically pause if the user loses their internet connection. This is the way to go here! function state(e) { if(navigator.onLine) { console.log('Cool we\'re up'); } else { console.log('Uh! we\'re down!'); } } window.addEventListener('offline', state); window.addEventListener('online', state); Here, we're attaching two event listeners to the window global. We want the browser to call the state function every time the user goes offline or online. We can then check whether the user is offline or online with navigator.onLine, which returns a Boolean value of true if there's an internet connection, and false if there's not.
I mean, obviously you can, but document.execCommand('copy') will not work if the event does not come from the user (click, double-click, and so on). This is a security implementation so that a user's clipboard is not messed around with by every website that they visit. The Canvas API - the web's drawing board HTML5 finally brought in support for <canvas>, a standard way to draw graphics on the web! Canvas can be used pretty much for everything related to graphics you can think of; from digitally signing with a pen, to creating 3D games on the web (3D games require WebGL knowledge, interested? - visit http://bit.ly/webgl-101). Let's look at the basics of the Canvas API with a simple example: <canvas id="canvas" width="100" height="100"></canvas> <script> const canvas = document.getElementById("canvas"); const ctx = canvas.getContext("2d"); ctx.moveTo(0,0); ctx.lineTo(100, 100); ctx.stroke(); </script> This renders the following: How does it do this? Firstly, document.getElementById('canvas') gives us the reference to the canvas on the document. Then we get the context of the canvas. This is a way to say what I want to do with the canvas. You could put a 3D value there, of course! That is indeed the case when you're doing 3D rendering with WebGL and canvas. Once we have a reference to our context, we can do a bunch of things and add methods provided by the API out-of-the-box. Here we moved the cursor to the (0, 0) coordinates. Then we drew a line till (100,100) (which is basically a diagonal on the square canvas). Then we called stroke to actually draw that on our canvas. Easy! Canvas is a wide topic and deserves a book of its own! If you're interested in developing awesome games and apps with Canvas, I recommend you start off with MDN docs: http://bit.ly/canvas-html5. The Fetch API - promise-based HTTP requests One of the coolest async APIs introduced in browsers is the Fetch API, which is the modern replacement for the XMLHttpRequest API. Have you ever found yourself using jQuery just for simplifying AJAX requests with $.ajax? If you have, then this is surely a golden API for you, as it is natively easier to code and read! However, fetch comes natively, hence, there are performance benefits. Let's see how it works: fetch(link) .then(data => { // do something with data }) .catch(err => { // do something with error }); Awesome! So fetch uses promises! If that's the case, we can combine it with async/await to make it look completely synchronous and easy to read! <img id="img1" alt="Mozilla logo" /> <img id="img2" alt="Google logo" /> const get2Images = async () => { const image1 = await fetch('https://cdn.mdn.mozilla.net/static/img/web-docs-sprite.22a6a085cf14.svg'); const image2 = await fetch('https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png'); console.log(image1); // gives us response as an object const blob1 = await image1.blob(); const blob2 = await image2.blob(); const url1 = URL.createObjectURL(blob1); const url2 = URL.createObjectURL(blob2); document.getElementById('img1').src = url1; document.getElementById('img2').src = url2; return 'complete'; } get2Images().then(status => console.log(status)); The line console.log(image1) will print the following: You can see the image1 response provides tons of information about the request. It has an interesting field body, which is actually a ReadableStream, and a byte stream of data that can be cast to a  Binary Large Object (BLOB) in our case. 
A blob object represents a file-like object of immutable and raw data. After getting the Response, we convert it into a blob object so that we can actually use it as an image. Here, fetch is actually fetching us the image directly so we can serve it to the user as a blob (without hot-linking it to the main website). Thus, this could be done on the server side, and blob data could be passed down a WebSocket or something similar. Fetch API customization The Fetch API is highly customizable. You can even include your own headers in the request. Suppose you've got a site where only authenticated users with a valid token can access an image. Here's how you'll add a custom header to your request: const headers = new Headers(); headers.append("Allow-Secret-Access", "yeah-because-my-token-is-1337"); const config = { method: 'POST', headers }; const req = new Request('http://myawesomewebsite.awesometld/secretimage.jpg', config); fetch(req) .then(img => img.blob()) .then(blob => myImageTag.src = URL.createObjectURL(blob)); Here, we added a custom header to our Request and then created something called a Request object (an object that has information about our Request). The first parameter, that is, http://myawesomewebsite.awesometld/secretimage.jpg, is the URL and the second is the configuration. Here are some other configuration options: Credentials: Used to pass cookies in a Cross-Origin Resource Sharing (CORS)-enabled server on cross-domain requests. Method: Specifies request methods (GET, POST, HEAD, and so on). Headers: Headers associated with the request. Integrity: A security feature that consists of a (possibly) SHA-256 representation of the file you're requesting, in order to verify whether the request has been tampered with (data is modified) or not. Probably not a lot to worry about unless you're building something on a very large scale and not on HTTPS. Redirect: Redirect can have three values: Follow: Will follow the URL redirects Error: Will throw an error if the URL redirects Manual: Doesn't follow redirect but returns a filtered response that wraps the redirect response Referrer: the URL that appears as a referrer header in the HTTP request. Accessing and modifying history with the history API You can access a user's history to some level and modify it according to your needs using the history API. It consists of the length and state properties: console.log(history, history.length, history.state); The output is as follows: {length: 4, scrollRestoration: "auto", state: null} 4 null In your case, the length could obviously be different depending on how many pages you've visited from that particular tab. history.state can contain anything you like (we'll come to its use case soon). Before looking at some handy history methods, let us take a look at the window.onpopstate event. Handling window.onpopstate events The window.onpopstate event is fired automatically by the browser when a user navigates between history states that a developer has set. This event is important to handle when you push to history object and then later retrieve information whenever the user presses the back/forward button of the browser. Here's how we'll program a simple popstate event: window.addEventListener('popstate', e => { console.log(e.state); // state data of history (remember history.state ?) }) Now we'll discuss some methods associated with the history object. Modifying history - the history.go(distance) method history.go(x) is equivalent to the user clicking his forward button x times in the browser. 
However, you can specify the distance to move; for example, history.go(5) is equivalent to the user hitting the forward button in the browser five times. Similarly, you can specify negative values as well to make it move backward. Specifying 0 or no value will simply refresh the page: history.go(5); // forwards the browser 5 times history.go(-1); // similar effect of clicking back button history.go(0); // refreshes page history.go(); // refreshes page Jumping ahead - the history.forward() method This method is simply the equivalent of history.go(1). This is handy when you want to just push the user to the page he/she is coming from. One use case of this is when you create a full-screen immersive web application and on your screen there are some minimal controls that play with the history behind the scenes: if(awesomeButtonClicked && userWantsToMoveForward()) { history.forward() } Going back - the history.back() method This method is simply the equivalent of history.go(-1). A negative number makes the history go backwards. Again, this is just a simple (and numberless) way to go back to a page the user came from. Its application could be similar to a forward button, that is, creating a full-screen web app and providing the user with an interface to navigate by. Pushing on the history - history.pushState() This is really fun. You can change the browser URL without hitting the server with an HTTP request. If you run the following JS in your browser, your browser will change the path from whatever it is (domain.com/abc/egh) to /i_am_awesome (domain.com/i_am_awesome) without actually navigating to any page: history.pushState({myName: "Mehul"}, "This is title of page", "/i_am_awesome"); history.pushState({page2: "Packt"}, "This is page2", "/page2_packt"); // <-- state is currently here The History API doesn't care whether the page actually exists on the server or not. It'll just replace the URL as it is instructed. The popstate event, triggered by the browser's back/forward buttons, will fire the handler below, and we can program it like this: window.onpopstate = e => { // when this is called, state is already updated. // e.state is the new state. It is null if it is the root state. if(e.state !== null) { console.log(e.state); } else { console.log("Root state"); } } To run this code, register the onpopstate handler first, then run the two history.pushState lines from earlier. Then press your browser's back button. You should see something like {myName: "Mehul"}, which is the information related to the parent state. Press the back button one more time and you'll see the message Root State. pushState does not fire the onpopstate event. Only the browser's back/forward buttons do.
Although using the History API directly in your code may not be beneficial to you right now, many frameworks and libraries such as React, under the hood, use the History API to create a seamless, reload-less, smooth experience for the end user. If you found this article useful, do check out the book Learn ECMAScript, Second Edition to learn the ECMAScript standards for designing quality web applications. What's new in ECMAScript 2018 (ES9)? 8 recipes to master Promises in ECMAScript 2018 Build a foodie bot with JavaScript

Handle Odoo application data with ORM API [Tutorial]

Sugandha Lahoti
19 Jul 2018
17 min read
The ORM API allows you to write complex logic and wizards to provide a rich user interaction for your apps. The ORM provides a set of methods to programmatically interact with the Odoo data model and its data, called the Application Programming Interface (API). These start with the basic CRUD (create, read, update, delete) operations, but also include other operations, such as data export and import, or utility functions to aid the user interface and experience. It also provides some decorators which allow us, when adding new methods, to let the ORM know how they should be handled. In this article, we will learn how to use the most important API methods available for any Odoo Model, and the available API decorators to be used in our custom methods, depending on their purpose. We will also explore the API offered by the Discuss app, since it provides the message and notification features for Odoo. The article is an excerpt from the book Odoo 11 Development Essentials - Third Edition, by Daniel Reis. All the code files in this post are available on Github. We will start by having a closer look at the API decorators. Understanding the ORM decorators ORM decorators are important for the ORM, and allow it to give those methods specific uses. Let's see the ORM decorators we have available, and when each should be used. Record handling decorators Most of the time, we want a custom method to perform some actions on a recordset. For this, we should use @api.multi, and in that case, the self argument will be the recordset to work with. The method's logic will usually include a for loop iterating on it. This is surely the most frequently used decorator. If no decorator is used on a model method, it will default to @api.multi behavior. In some cases, the method is prepared to work with a single record (a singleton). Here we could use the @api.one decorator, but this is not advised, because in version 9.0 it was announced that it would be deprecated and may be removed in the future. Instead, we should use @api.multi and add to the method code a line with self.ensure_one(), to ensure it is a singleton as expected. Despite being deprecated, the @api.one decorator is still supported. So it's worth knowing that it wraps the decorated method, doing the for-loop iteration to feed it one record at a time. So, in an @api.one decorated method, self is guaranteed to be a singleton. The return values of each individual method call are aggregated as a list and then returned. The return value of @api.one can be tricky: it returns a list, not the data structure returned by the actual method. For example, if the method code returns a dict, the actual return value is a list of dict values. This misleading behavior was the main reason the decorator was deprecated. In some cases, the method is expected to work at the class level, and not on particular records. In some object-oriented languages this would be called a static method. These class-level static methods should be decorated with @api.model. In these cases, self should be used as a reference for the model, without expecting it to contain actual records. Methods decorated with @api.model cannot be used with user interface buttons. In those cases, @api.multi should be used instead. Specific purpose decorators A few other decorators have more specific purposes and are to be used together with the decorators described earlier: @api.depends(fld1,...) is used for computed field functions, to identify on what changes the (re)calculation should be triggered.
It must set values on the computed fields, otherwise it will error. @api.constrains(fld1,...) is used for validation functions, and performs checks for when any of the mentioned fields are changed. It should not write changes in the data. If the checks fail, an exception should be raised. @api.onchange(fld1,...) is used in the user interface, to automatically change some field values when other fields are changed. The self argument is a singleton with the current form data, and the method should set values on it for the changes that should happen in the form. It doesn't actually write to database records, instead it provides information to change the data in the UI form. When using the preceding decorators, no return value is needed, except for onchange methods, which can optionally return a dict with a warning message to display in the user interface. As an example, we can use this to perform some automation in the To-Do form: when Responsible is set to an empty value, we will also empty the team list. For this, edit the todo_stage/models/todo_task_model.py file to add the following method: @api.onchange('user_id') def onchange_user_id(self): if not self.user_id: self.team_ids = None return { 'warning': { 'title': 'No Responsible', 'message': 'Team was also reset.' } } Here, we are using the @api.onchange decorator to attach some logic to any changes in the user_id field, when done through the user interface. Note that the actual method name is not relevant, but the convention is for its name to begin with onchange_. Inside an onchange method, self represents a single virtual record containing all the fields currently set in the record being edited, and we can interact with them. Most of the time, this is what we want to do: to automatically fill values in other fields, depending on the value set in the changed field. In this case, we are setting the team_ids field to an empty value. The onchange methods don't need to return anything, but they can return a dictionary containing a warning or a domain key: The warning key should describe a message to show in a dialogue window, such as: {'title': 'Message Title', 'message': 'Message Body'}. The domain key can set or change the domain attribute of other fields. This allows you to build more user-friendly interfaces, by having to-many fields list only the selection options that make sense for this case. The value for the domain key looks like this: {'team_ids': [('is_author', '=', True)]} Using the ORM built-in methods The decorators discussed in the previous section allow us to add certain features to our models, such as implementing validations and automatic computations. We also have the basic methods provided by the ORM, used mainly to perform CRUD (create, read, update and delete) operations on our model data. To read data, the main methods provided are search() and browse(). Now we will explore the write operations provided by the ORM, and how they can be extended to support custom logic. Methods for writing model data The ORM provides three methods for the three basic write operations: <Model>.create(values) creates a new record on the model. Returns the created record. <Recordset>.write(values) updates field values on the recordset. Returns nothing. <Recordset>.unlink() deletes the records from the database. Returns nothing. The values argument is a dictionary, mapping field names to values to write. In some cases, we need to extend these methods to add some business logic to be triggered whenever these actions are executed.
By placing our logic in the appropriate section of the custom method, we can have the code run before or after the main operations are executed. Using the TodoTask model as an example, we can make a custom create(), which would look like this: @api.model def create(self, vals): # Code before create: should use the `vals` dict new_record = super(TodoTask, self).create(vals) # Code after create: can use the `new_record` created return new_record Python 3 introduced a simplified way to use super() that could have been used in the preceding code samples. We chose to use the Python 2 compatible form. If we don't mind breaking Python 2 support for our code, we can use the simplified form, without the arguments referencing the class name and self. For example: super().create(vals) A custom write() would follow this structure: @api.multi def write(self, vals): # Code before write: can use `self`, with the old values super(TodoTask, self).write(vals) # Code after write: can use `self`, with the updated values return True While extending create() and write()  opens up a lot of possibilities, remember in many cases we don't need to do that, since there are tools also available that may be better suited: For field values that are automatically calculated based on other fields, we should use computed fields. An example of this is to calculate a header total when the values of the lines are changed. To have field default values calculated dynamically, we can use a field default bound to a function instead of a fixed value. To have values set on other fields when a field is changed, we can use onchange functions. An example of this is when picking a customer, setting their currency as the document's currency that can later be manually changed by the user. Keep in mind that on change only works on form view interaction and not on direct write calls. For validations, we should use constraint functions decorated with @api.constraints(fld1,fld2,...). These are like computed fields but, instead of computing values, they are expected to raise errors. Consider carefully if you really need to use extensions to the create or write methods. In most cases, we just need to perform some validation or automatically compute some value, when the record is saved. But we have better tools for this: validations are best implemented with @api.constrains methods, and automatic calculations are better implemented as computed fields. In this case, we need to compute field values when saving. If, for some reason, computed fields are not a valid solution, the best approach is to have our logic at the top of the method, accumulating the changes needed into the vals dictionary that will be passed to the final super() call. For the write() method, having further write operations on the same model will lead to a recursion loop and end with an error when the worker process resources are exhausted. Please consider if this is really needed. If it is, a technique to avoid the recursion loop is to set a flag in the context. For example, we could add code such as the following: if not self.env.context.get('todo_task_writing'): self.with_context(todo_task_writing=True).write( some_values) With this technique, our specific logic is guarded by an if statement, and runs only if a specific marker is not found in the context. Furthermore, our self.write() operations should use with_context to set that marker. 
These are common extension examples, but of course any standard method available for a model can be inherited in a similar way to add our custom logic to it.

Methods for web client use over RPC

We have seen the most important model methods used to generate recordsets and how to write to them, but there are a few more model methods available for more specific actions, as shown here:

read([fields]) is similar to the browse method but, instead of a recordset, it returns a list of rows of data with the fields given as its argument. Each row is a dictionary. It provides a serialized representation of the data that can be sent through RPC protocols and is intended to be used by client programs, not in server logic.
search_read([domain], [fields], offset=0, limit=None, order=None) performs a search operation followed by a read on the resulting record list. It is intended to be used by RPC clients and saves them the extra round trip needed when doing a search followed by a read on the results.

Methods for data import and export

The import and export operations are also available from the ORM API, through the following methods:

load([fields], [data]) is used to import data acquired from a CSV file. The first argument is the list of fields to import, and it maps directly to a CSV top row. The second argument is a list of records, where each record is a list of string values to parse and import, and it maps directly to the CSV data rows and columns. It implements the features of CSV data import, such as support for external identifiers. It is used by the web client Import feature.
export_data([fields], raw_data=False) is used by the web client Export function. It returns a dictionary with a data key containing the data: a list of rows. The field names can use the .id and /id suffixes used in CSV files, and the data is in a format compatible with an importable CSV file. The optional raw_data argument allows data values to be exported with their Python types, instead of the string representation used in CSV.

Methods for the user interface

The following methods are mostly used by the web client to render the user interface and perform basic interaction:

name_get() returns a list of (ID, name) tuples with the text representing each record. It is used by default for computing the display_name value, providing the text representation of relation fields. It can be extended to implement custom display representations, such as displaying the record code and name instead of only the name (a short sketch of this follows the list below).
name_search(name='', args=None, operator='ilike', limit=100) returns a list of (ID, name) tuples, where the display name matches the text in the name argument. It is used in the UI while typing in a relation field to produce the list of suggested records matching the typed text. For example, it is used to implement product lookup both by name and by reference, while typing in a field to pick a product.
name_create(name) creates a new record with only the title name to use for it. It is used in the UI for the "quick-create" feature, where you can quickly create a related record by just providing its name. It can be extended to provide specific defaults for the new records created through this feature.
default_get([fields]) returns a dictionary with the default values for a new record to be created. The default values may depend on variables such as the current user or the session context.
fields_get() is used to describe the model's field definitions, as seen in the View Fields option of the developer menu.
fields_view_get() is used by the web client to retrieve the structure of the UI view to render. It can be given the ID of the view as an argument, or the type of view we want using view_type='form'. For example, you may try this: self.fields_view_get(view_type='tree').
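As referenced in the name_get() item above, here is a hedged sketch of extending it to display a code together with the name. The ref field is an assumption introduced only for this illustration, and the sketch assumes the model has a name field:

    from odoo import api, fields, models

    class TodoTask(models.Model):
        _inherit = 'todo.task'

        # Illustrative reference code, not part of the original module.
        ref = fields.Char('Reference')

        @api.multi
        def name_get(self):
            # Build (id, text) tuples, prefixing the name with the
            # reference code when one is set.
            result = []
            for task in self:
                name = task.name
                if task.ref:
                    name = '[%s] %s' % (task.ref, name)
                result.append((task.id, name))
            return result

With this in place, relation fields pointing at To-Do tasks would display something like "[T001] Install Odoo" instead of just the task name.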
The Mail and Social features API

Odoo has global messaging and activity planning features available, provided by the Discuss app, with the technical name mail. The mail module provides the mail.thread abstract class that makes it simple to add the messaging features to any model. To add the mail.thread features to the To-Do tasks, we just need to inherit from it:

    class TodoTask(models.Model):
        _name = 'todo.task'
        _inherit = ['todo.task', 'mail.thread']

After this, among other things, our model will have two new fields available. For each record (sometimes also called a document) we have:

message_follower_ids stores the followers and the corresponding notification preferences
message_ids lists all the related messages

The followers can be either partners or channels. A partner represents a specific person or organization. A channel is not a particular person; it represents a subscription list instead. Each follower also has a list of message types that they are subscribed to. Only the selected message types will generate notifications for them.

Message subtypes

Some types of messages are called subtypes. They are stored in the mail.message.subtype model and accessible in the Technical | Email | Subtypes menu. By default, we have three message subtypes available:

Discussions, with the mail.mt_comment XML ID, used for the messages created with the Send message link. It is intended to send a notification.
Activities, with the mail.mt_activities XML ID, used for the messages created with the Schedule activity link. It is intended to send a notification.
Note, with the mail.mt_note XML ID, used for the messages created with the Log note link. It is not intended to send a notification.

Subtypes have the default notification settings described previously, but users are able to change them for specific documents, for example, to mute a discussion they are not interested in.

Other than the built-in subtypes, we can also add our own subtypes to customize the notifications for our apps. Subtypes can be generic or intended for a particular model. In the latter case, we should fill in the subtype's res_model field with the name of the model it should apply to.

Posting messages

Our business logic can make use of this messaging system to send notifications to users. To post a message, we use the message_post() method. For example:

    self.message_post('Hello!')

This adds a simple text message, but sends no notification to the followers. That is because the mail.mt_note subtype is used for posted messages by default. But we can have the message posted with the particular subtype we want. To add a message and have it send notifications to the followers, we should use the following:

    self.message_post('Hello again!', subtype='mail.mt_comment')

We can also add a subject line to the message by adding the subject parameter. The message body is HTML, so we can include markup for text effects, such as <b> for bold text or <i> for italics. The message body will be sanitized for security reasons, so some particular HTML elements may not make it into the final message.
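Tying this together, here is a hedged sketch of a model method that notifies followers when a task is completed. The method name and message wording are illustrative assumptions, and the sketch assumes the model has a name field:

    from odoo import api, models

    class TodoTask(models.Model):
        _inherit = 'todo.task'

        @api.multi
        def notify_task_done(self):
            for task in self:
                # Post an HTML body with a subject line; using the
                # mail.mt_comment subtype makes it a Discussion message,
                # so followers receive a notification.
                task.message_post(
                    body='Task <b>%s</b> has been completed.' % task.name,
                    subject='Task completed',
                    subtype='mail.mt_comment')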
Adding followers

Also interesting from a business logic point of view is the ability to automatically add followers to a document, so that they then get the corresponding notifications. For this, we have several methods available to add followers:

message_subscribe(partner_ids=<list of int IDs>) adds Partners
message_subscribe(channel_ids=<list of int IDs>) adds Channels
message_subscribe_users(user_ids=<list of int IDs>) adds Users

The default subtypes will be used. To force subscribing to a specific list of subtypes, just add the subtype_ids=<list of int IDs> argument with the specific subtypes you want to subscribe to.

In this article, we went through the features the ORM API provides and how they can be used when creating our models. We also learned about the mail module and the global messaging features it provides. To look further into the ORM, and to gain a deeper understanding of how recordsets work and can be manipulated, read our book Odoo 11 Development Essentials - Third Edition.

ERP tool in focus: Odoo 11
Building Your First Odoo Application
How to Scaffold a New module in Odoo 11

Apollo 11 source code: A small step for a woman, and a huge leap for 'software engineering'

Sugandha Lahoti
19 Jul 2018
5 min read
Yesterday, Reddit saw an explosion of discussion around the original Apollo 11 Guidance Computer (AGC) source code. The code was uploaded to GitHub in its entirety two years ago, thanks to former NASA intern Chris Garry, and it appears to have received significant updates this week, judging by the timestamps on the files in the repo. This is a project that will always hold a special place for software professionals around the world: it is the project that made 'software engineering' a real discipline.

What is the AGC and why did it matter for Apollo 11?

The AGC was a digital computer produced for the Apollo program, installed on board the Apollo 11 Command Module (CM) and Lunar Module (LM). The AGC code is also referred to as 'COLOSSUS 2A' and was written in AGC assembly language and stored on rope memory. On any given Apollo mission, there were two AGCs, one for the CM and one for the LM. The two AGCs were identical and interchangeable. However, their software differed, as the LM and the CM performed different tasks for the spacecraft. The CM launched the three astronauts to the moon, and back again. The LM handled the landing of two of the astronauts on the moon, while the third astronaut remained in the CM, in orbit around the moon.

The woman who coined the term 'software engineering'

The AGC code was brought to life by Margaret Hamilton, director of software engineering for the project. In the male-dominated world of tech and engineering of that time, Margaret was an exception. She led the team credited with developing the software for Apollo and Skylab, keeping her head high even through backlash. "People used to say to me, 'How can you leave your daughter? How can you do this?'" She went on to become the founder and CEO of Hamilton Technologies, Inc. and was awarded the Presidential Medal of Freedom in 2016.

Hamilton is considered one of the pioneers of software engineering, credited with coining the term "software engineering" itself. She first started using the term during the early Apollo missions, wanting to give software the same legitimacy as other disciplines. At the time, it was not taken seriously, but over the years software engineering has become an IEEE standard.

What can we learn from the AGC code developers?

Understandably, the AGC's specifications and processing power look very modest compared to the technology of today. Some jokingly call it a calculator rather than a computer; others note that the CPU in a microwave oven is probably more powerful than an AGC. In spite of being very basic technology in terms of processing power and speed, the Apollo 11 spacecraft was able to complete the first ever manned mission to the moon and back. This is not just a huge testament to the original programming team's ingenuity and resourcefulness, but also to their grit and meticulousness. One might expect such a team to have produced serious (boring) code aimed at flawless execution. Read between the (code) lines, though, and you see a team that enjoyed every moment of writing it, with quirky naming conventions and humorous notes inside the comments.

Back to the present

Ever since the code was uploaded to GitHub two years ago, coders and software programmers all over the world have been dissecting it, particularly interested in the quirky English-language explanations in the code. People on Reddit describe the code files as real programming that doesn't rely on APIs to do the heavy lifting.

People are also loving the naming conventions of the source code files and their programs, which were 1960s-inspired, light-hearted jokes. For example, there is the BURN_BABY_BURN--MASTER IGNITION ROUTINE for the master ignition routine, and PINBALL_GAME_BUTTONS_AND_LIGHTS.agc for the keyboard and display code. Even the programs are quirky: the LUNAR_LANDING_GUIDANCE_EQUATIONS.s file ended up having two temporary lines of code become permanent. You can read more such interesting comments on Reddit.

However, a point worth noting is that Margaret, and by extension women in tech, are conspicuously missing from this rich discussion. We can start seeing real change only when discussion forums include the various facets of the tool or technology under discussion. The people behind the tech are an important facet, and more so when they are in the minority.

You can also read The Apollo Guidance Computer: Architecture and Operation for the inside scoop on how the AGC functioned, the kinds of design decisions and software choices the programmers had to make based on the features and limitations of the AGC, and other insights. The GitHub repo for the original Apollo 11 source code also contains material for further reading.

Is space the final frontier for AI? NASA to reveal Kepler's latest AI backed discovery
NASA's Kepler discovers a new exoplanet using Google's Machine Learning
Meet CIMON, the first AI robot to join the astronauts aboard ISS


Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation

Savia Lobo
19 Jul 2018
3 min read
The Google AI Quantum team announced two releases at the First International Workshop on Quantum Software and Quantum Machine Learning (QSML) yesterday: the public alpha release of Cirq, an open source framework for NISQ computers, and OpenFermion-Cirq, an example of a Cirq-based application enabling near-term algorithms.

Noisy Intermediate Scale Quantum (NISQ) computers are devices with roughly 50-100 qubits and high-fidelity quantum gates. Quantum algorithms have to be written with these machines in mind before researchers can understand the power they hold. However, quantum algorithms for such computers have their limitations, such as:

A poor mapping between the algorithms and the machines
Complex geometric constraints on some quantum processors

These and other nuances inevitably lead to wasted resources and faulty computations.

Cirq is a great help for researchers here. It is focused on near-term questions, helping researchers understand whether NISQ quantum computers are capable of solving computational problems of practical importance. It is licensed under Apache 2 and is free to be embedded or modified within any commercial or open source package.

Cirq highlights

With Cirq, researchers can write quantum algorithms for specific quantum processors. It provides fine-tuned user control over quantum circuits by specifying gate behavior using native gates, placing these gates appropriately on the device, and scheduling the timing of these gates within the constraints of the quantum hardware. Other features of Cirq include:

Optimized data structures for writing and compiling quantum circuits, letting users get the most out of NISQ architectures
Support for running the algorithms locally on a simulator (a minimal sketch appears at the end of this section)
A design meant to integrate easily with future quantum hardware or larger simulators via the cloud

OpenFermion-Cirq highlights

The Google AI Quantum team also released OpenFermion-Cirq, an example of a Cirq-based application that enables near-term algorithms. OpenFermion is a platform for developing quantum algorithms for chemistry problems. OpenFermion-Cirq extends the functionality of OpenFermion by providing routines and tools for using Cirq to compile and compose circuits for quantum simulation algorithms. For instance, it can be used to easily build quantum variational algorithms for simulating properties of molecules and complex materials.

While building Cirq, the Google AI Quantum team worked with early testers to gain feedback and insight into algorithm design for NISQ computers. The following are some instances of Cirq work resulting from these early adopters:

Zapata Computing: simulation of a quantum autoencoder (example code, video tutorial)
QC Ware: QAOA implementation and integration into QC Ware's AQUA platform (example code, video tutorial)
Quantum Benchmark: integration of True-Q software tools for assessing and extending hardware capabilities (video tutorial)
Heisenberg Quantum Simulations: simulating the Anderson Model
Cambridge Quantum Computing: integration of the proprietary quantum compiler t|ket> (video tutorial)
NASA: an architecture-aware compiler based on temporal planning for QAOA (slides) and a simulator of quantum computers (slides)

The team also announced that it is using Cirq to create circuits that run on Google's Bristlecone processor. Future plans include making the Bristlecone processor available in the cloud, with Cirq as the interface for users to write programs for it.
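As referenced in the highlights above, circuits can be built and run on the bundled simulator. The following is a minimal sketch, not taken from the announcement; it follows Cirq's currently documented interface, which may differ in detail from the 2018 public alpha:

    import cirq

    # Two line qubits and a small circuit: Hadamard, CNOT, then a measurement.
    q0, q1 = cirq.LineQubit.range(2)
    circuit = cirq.Circuit([
        cirq.H(q0),
        cirq.CNOT(q0, q1),
        cirq.measure(q0, q1, key='m'),
    ])

    # Run the circuit locally on the built-in simulator.
    simulator = cirq.Simulator()
    result = simulator.run(circuit, repetitions=20)
    print(result.histogram(key='m'))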
To learn more about both releases, check out the GitHub repositories for Cirq and OpenFermion-Cirq.

Q# 101: Getting to know the basics of Microsoft's new quantum computing language
Google Bristlecone: A New Quantum processor by Google's Quantum AI lab
Quantum A.I. : An intelligent mix of Quantum+A.I.


PostGIS extension: pgRouting for calculating driving distance [Tutorial]

Pravin Dhandre
19 Jul 2018
5 min read
pgRouting is an extension of the PostGIS and PostgreSQL geospatial database that adds routing and other network analysis functionality. In this tutorial, we will learn to work with the pgRouting tool to estimate the driving distance to all nearby nodes, which can be very useful in supply chain, logistics, and transportation applications. This tutorial is an excerpt from a book written by Mayra Zurbaran, Pedro Wightman, Paolo Corti, Stephen Mather, Thomas Kraft and Bborie Park titled PostGIS Cookbook - Second Edition.

Driving distance is useful when user sheds are needed that give realistic driving distance estimates, for example, for all customers within five miles of driving, biking, or walking distance. These estimates can be contrasted with buffering techniques, which assume no barrier to travelling and are useful for revealing the underlying structure of our transportation networks relative to individual locations.

Driving distance (pgr_drivingDistance) is a query that calculates all nodes within the specified driving distance of a starting node. It is an optional function compiled with pgRouting, so if you compile pgRouting yourself, make sure that you enable it and include the CGAL library, an optional dependency for pgr_drivingDistance.

We will start by loading a test dataset. You can get some basic sample data from https://docs.pgrouting.org/latest/en/sampledata.html. In the following example, we will look at all users within a distance of three units from our starting point, a proposed bike shop at node 2:

    SELECT * FROM pgr_drivingDistance(
        'SELECT id, source, target, cost FROM chp06.edge_table',
        2, 3
    );

As usual, we just get a plain list back from pgr_drivingDistance, in this case comprising the sequence, node, edge cost, and aggregate cost. pgRouting, like PostGIS, gives us low-level functionality; we need to reconstruct the geometries we need from that low-level output. We can use the node IDs to extract the geometries of all of our nodes by executing the following script:

    WITH DD AS (
        SELECT * FROM pgr_drivingDistance(
            'SELECT id, source, target, cost FROM chp06.edge_table',
            2, 3
        )
    )
    SELECT ST_AsText(the_geom)
    FROM chp06.edge_table_vertices_pgr w, DD d
    WHERE w.id = d.node;

The result, however, is just a cluster of points. Normally, when we think of driving distance, we visualize a polygon. Fortunately, the pgr_alphaShape function provides that functionality. This function expects id, x, and y values as input, so we will first change our previous query to extract x and y from the geometries in edge_table_vertices_pgr:

    WITH DD AS (
        SELECT * FROM pgr_drivingDistance(
            'SELECT id, source, target, cost FROM chp06.edge_table',
            2, 3
        )
    )
    SELECT id::integer,
           ST_X(the_geom)::float AS x,
           ST_Y(the_geom)::float AS y
    FROM chp06.edge_table_vertices_pgr w, DD d
    WHERE w.id = d.node;

This returns the id, x, and y values for each node reachable within our driving distance.
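Before building a polygon from these points, note that the same queries can also be issued from application code. Here is a hedged sketch using Python and psycopg2; the connection parameters are assumptions, while the query itself is the driving-distance query shown above:

    import psycopg2

    # Assumed connection parameters; adjust them to your own database.
    conn = psycopg2.connect(dbname='postgis_cookbook', user='me',
                            password='password', host='localhost')
    cur = conn.cursor()

    # Nodes within 3 cost units of node 2, as in the example above.
    cur.execute("""
        SELECT * FROM pgr_drivingDistance(
            'SELECT id, source, target, cost FROM chp06.edge_table',
            2, 3);
    """)

    # Each row holds the sequence, node, edge cost, and aggregate cost.
    for row in cur.fetchall():
        print(row)

    cur.close()
    conn.close()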
Now we can wrap the id, x, and y query in the pgr_alphaShape function:

    WITH alphashape AS (
        SELECT pgr_alphaShape('
            WITH DD AS (
                SELECT * FROM pgr_drivingDistance(
                    ''SELECT id, source, target, cost FROM chp06.edge_table'',
                    2, 3
                )
            ),
            dd_points AS (
                SELECT id::integer,
                       ST_X(the_geom)::float AS x,
                       ST_Y(the_geom)::float AS y
                FROM chp06.edge_table_vertices_pgr w, DD d
                WHERE w.id = d.node
            )
            SELECT * FROM dd_points
        ')
    ),

So first, we get our cluster of points. As we did earlier, we explicitly convert the text to geometric points:

    alphapoints AS (
        SELECT ST_MakePoint((pgr_alphashape).x, (pgr_alphashape).y)
        FROM alphashape
    ),

Now that we have points, we can create a line by connecting them:

    alphaline AS (
        SELECT ST_Makeline(ST_MakePoint)
        FROM alphapoints
    )
    SELECT ST_MakePolygon(ST_AddPoint(ST_Makeline, ST_StartPoint(ST_Makeline)))
    FROM alphaline;

Finally, we construct the line as a polygon using ST_MakePolygon. This requires adding the start point with ST_StartPoint in order to properly close the polygon. The complete code is as follows:

    WITH alphashape AS (
        SELECT pgr_alphaShape('
            WITH DD AS (
                SELECT * FROM pgr_drivingDistance(
                    ''SELECT id, source, target, cost FROM chp06.edge_table'',
                    2, 3
                )
            ),
            dd_points AS (
                SELECT id::integer,
                       ST_X(the_geom)::float AS x,
                       ST_Y(the_geom)::float AS y
                FROM chp06.edge_table_vertices_pgr w, DD d
                WHERE w.id = d.node
            )
            SELECT * FROM dd_points
        ')
    ),
    alphapoints AS (
        SELECT ST_MakePoint((pgr_alphashape).x, (pgr_alphashape).y)
        FROM alphashape
    ),
    alphaline AS (
        SELECT ST_Makeline(ST_MakePoint)
        FROM alphapoints
    )
    SELECT ST_MakePolygon(
        ST_AddPoint(ST_Makeline, ST_StartPoint(ST_Makeline))
    )
    FROM alphaline;

Our first driving distance calculation can be summarized as follows: from node 2, with a driving distance of 3, we can reach nodes 9, 11, and 13.

With this, you can calculate the most optimistic distance route across the different nodes in your transportation network. To explore more with PostGIS, check out PostGIS Cookbook - Second Edition and get access to a complete range of PostGIS techniques and related extensions for better analytics on your spatial information.

Top 7 libraries for geospatial analysis
Using R to implement Kriging - A Spatial Interpolation technique for Geostatistics data
Learning R for Geospatial Analysis

How to set up an Ethereum development environment [Tutorial]

Packt Editorial Staff
18 Jul 2018
8 min read
There are various ways to develop on the Ethereum blockchain. In this article we will look at the mainstream options, which are:

Test networks
How to set up an Ethereum private net

This tutorial is extracted from the book Mastering Blockchain - Second Edition written by Imran Bashir.

There are multiple ways to develop smart contracts on Ethereum. The usual and sensible approach is to develop and test Ethereum smart contracts either on a local private net or in a simulated environment, and then deploy them on a public testnet. After all the relevant tests are successful on the public testnet, the contracts can be deployed to the public mainnet. There are, however, variations in this process, and many developers opt to develop and test contracts only in locally simulated environments, and then deploy them to the public mainnet or to their private production blockchain networks. Developing in a simulated environment and then deploying directly to a public network can lead to a faster time to production, since setting up a private network may take longer than setting up a local development environment with a blockchain simulator.

Let's start with connecting to a test network.

Ethereum connection on test networks

The Ethereum Go client, Geth (https://geth.ethereum.org), can be connected to the test network using the following command:

    $ geth --testnet

Running this command produces output showing the type of network chosen and various other pieces of information regarding the blockchain download.

A blockchain explorer for the testnet is located at https://ropsten.etherscan.io and can be used to trace transactions and blocks on the Ethereum test network. There are other test networks available too, such as Frontier, Morden, Ropsten, and Rinkeby. Geth can be issued a command-line flag to connect to the desired network:

--testnet: Ropsten network: pre-configured proof-of-work test network
--rinkeby: Rinkeby network: pre-configured proof-of-authority test network
--networkid value: Network identifier (integer, 1=Frontier, 2=Morden (disused), 3=Ropsten, 4=Rinkeby) (default: 1)

Now let us do some experiments with building a private network, and then we will see how a contract can be deployed on this network using the Mist and command-line tools.

Setting up a private net

A private net allows the creation of an entirely new blockchain. This is different from testnet or mainnet in the sense that it uses its own genesis block and network ID. In order to create a private net, three components are needed:

Network ID
The genesis file
A data directory to store blockchain data

Even though the data directory is not strictly required to be mentioned, if there is more than one blockchain already active on the system, then the data directory should be specified so that a separate directory is used for the new blockchain.

On the mainnet, the Geth Ethereum client is capable of discovering boot nodes by default, as they are hardcoded in the Geth client, and it connects automatically. On a private net, however, Geth needs to be configured with appropriate flags and settings in order for it to be discoverable by other peers, or to be able to discover other peers. We will see how this is achieved shortly.

In addition to the three components mentioned previously, it is desirable to disable node discovery so that other nodes on the internet cannot discover your private network, keeping it secure.
If other networks happen to have the same genesis file and network ID, they may connect to your private net. The chance of having the same network ID and genesis block is very low but, nevertheless, disabling node discovery is good practice and is recommended. In the following section, all these parameters are discussed in detail with a practical example.

Network ID

The network ID can be any positive number except 1 and 3, which are already in use by the Ethereum mainnet and testnet (Ropsten), respectively. Network ID 786 has been chosen for the example private network discussed later in this section.

The genesis file

The genesis file contains the necessary fields required for a custom genesis block. This is the first block in the network and does not point to any previous block. The Ethereum protocol performs checking in order to ensure that no other node on the internet can participate in the consensus mechanism unless they have the same genesis block. The chain ID is usually used as an identification of the network. A custom genesis file that will be used later in the example is shown here:

    {
      "nonce": "0x0000000000000042",
      "timestamp": "0x00",
      "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
      "extraData": "0x00",
      "gasLimit": "0x8000000",
      "difficulty": "0x0400",
      "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
      "coinbase": "0x3333333333333333333333333333333333333333",
      "alloc": { },
      "config": {
        "chainId": 786,
        "homesteadBlock": 0,
        "eip155Block": 0,
        "eip158Block": 0
      }
    }

This file is saved as a text file with the JSON extension; for example, privategenesis.json. Optionally, Ether can be pre-allocated by specifying the beneficiary's addresses and the amount of Wei, but it is usually not necessary since, on a private network, Ether can be mined very quickly. In order to pre-allocate, a section can be added to the genesis file, as shown here:

    "alloc": {
      "0xcf61d213faa9acadbf0d110e1397caf20445c58f": {
        "balance": "100000"
      }
    }

Now let's see what each of these parameters means:

nonce: This is a 64-bit hash used to prove that PoW has been sufficiently completed. It works in combination with the mixhash parameter.
timestamp: This is the Unix timestamp of the block. It is used to verify the sequence of the blocks and for difficulty adjustment. For example, if blocks are being generated too quickly, the difficulty goes higher.
parentHash: This is always zero, this being the genesis (first) block, as there is no parent of the first block.
extraData: This parameter allows a 32-bit arbitrary value to be saved with the block.
gasLimit: This is the limit on the expenditure of gas per block.
difficulty: This parameter is used to determine the mining target. It represents the difficulty level of the hash required to prove the PoW.
mixhash: This is a 256-bit hash which works in combination with nonce to prove that a sufficient amount of computational resources has been spent in order to complete the PoW requirements.
coinbase: This is the 160-bit address to which the mining reward is sent as a result of successful mining.
alloc: This parameter contains the list of pre-allocated wallets. The long hex digit is the account to which the balance is allocated.
config: This section contains various configuration information defining the chain ID and blockchain hard-fork block numbers. This parameter is not required to be used in private networks.

Data directory

This is the directory where the blockchain data for the private Ethereum network will be saved. In the following example, it is ~/etherprivate/.
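Since the genesis file is plain JSON, it can also be generated from a short script. The following is a minimal sketch using only Python's standard library; the field values mirror the privategenesis.json example above, and the output path is an assumption:

    import json

    # Same values as the privategenesis.json example shown above.
    genesis = {
        "nonce": "0x0000000000000042",
        "timestamp": "0x00",
        "parentHash": "0x" + "00" * 32,
        "extraData": "0x00",
        "gasLimit": "0x8000000",
        "difficulty": "0x0400",
        "mixhash": "0x" + "00" * 32,
        "coinbase": "0x3333333333333333333333333333333333333333",
        "alloc": {},
        "config": {
            "chainId": 786,
            "homesteadBlock": 0,
            "eip155Block": 0,
            "eip158Block": 0
        }
    }

    # Write the file; it can then be passed to Geth when initializing
    # the private chain (for example, with Geth's init subcommand).
    with open("privategenesis.json", "w") as f:
        json.dump(genesis, f, indent=2)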
In the Geth client, a number of parameters are specified in order to launch and fine-tune the configuration of the private network. These flags are listed here.

Flags and their meaning

The following are the flags used with the Geth client:

--nodiscover: This flag ensures that the node is not automatically discoverable if it happens to have the same genesis file and network ID.
--maxpeers: This flag is used to specify the number of peers allowed to connect to the private net. If it is set to 0, then no one will be able to connect, which might be desirable in a few scenarios, such as private testing.
--rpc: This is used to enable the RPC interface in Geth.
--rpcapi: This flag takes a list of APIs to be allowed as a parameter. For example, eth,web3 will enable the Eth and Web3 interfaces over RPC.
--rpcport: This sets up the TCP RPC port; for example: 9999.
--rpccorsdomain: This flag specifies the URL that is allowed to connect to the private Geth node and perform RPC operations. The cors in --rpccorsdomain stands for cross-origin resource sharing.
--port: This specifies the TCP port that will be used to listen to incoming connections from other peers.
--identity: This flag is a string that specifies the name of the private node.

Static nodes

If there is a need to connect to a specific set of peers, then these nodes can be added to a file in the directory where the chaindata and keystore files are saved; for example, the ~/etherprivate/ directory. The filename should be static-nodes.json. This is valuable in a private network because, this way, the nodes can be discovered on the private network. An example of the JSON file is shown as follows:

    [
      "enode://44352ede5b9e792e437c1c0431c1578ce3676a87e1f588434aff1299d30325c233c8d426fc57a25380481c8a36fb3be2787375e932fb4885885f6452f6efa77f@xxx.xxx.xxx.xxx:TCP_PORT"
    ]

Here, xxx is the public IP address and TCP_PORT can be any valid and available TCP port on the system. The long hex string is the node ID.

To summarize, we explored Ethereum test networks and how to set up private Ethereum networks. To learn about cryptography and cryptocurrencies, and to build highly secure, decentralized applications that conduct trusted in-app transactions, see the book Mastering Blockchain - Second Edition.

Everything you need to know about Ethereum
Will Ethereum eclipse Bitcoin?
The trouble with Smart Contracts


Furthering the Net Neutrality debate, GOP proposes the 21st Century Internet Act

Sugandha Lahoti
18 Jul 2018
3 min read
GOP Rep. Mike Coffman has proposed a new bill to solidify the principles of net neutrality into law, rather than leaving them as a set of rules to be modified by the FCC every year. The bill, known as the 21st Century Internet Act, would ban providers from blocking, throttling, or offering paid fast lanes. It would also forbid them from participating in paid prioritization and from charging access fees to edge providers. It could take some time for this amendment to be voted on in Congress; much depends on the makeup of Congress after the midterm elections.

The 21st Century Internet Act modifies the Communications Act of 1934 and adds a new Title VIII section full of conditions specific to internet providers. This new title permanently codifies into law the "four corners" of net neutrality. The amendment proposes these measures:

No blocking: A broadband internet access service provider cannot block lawful content, or charge an edge provider a fee to avoid blocking of content.
No throttling: The service provider cannot degrade or enhance (slow down or speed up) internet traffic.
No paid prioritization: The internet access provider may not engage in paid preferential treatment.
No unreasonable interference: The service provider cannot interfere with the ability of end users to select the internet access service of their choice.

This bill aims to settle the long-running debate over whether internet access is an information service or a telecommunications service. In his letter to FCC Chairman Ajit Pai, Coffman writes, "The Internet has been and remains a transformative tool, and I am concerned any action you may take to alter the rules under which it functions may well have significant unanticipated negative consequences."

As far as the FCC's role is concerned, the 21st Century Internet Act would solidify the rules of net neutrality, barring the FCC from modifying them. The commission would solely be responsible for overseeing the bill's implementation and enforcing the law. This would include investigating unfair acts or practices, such as false advertising or misrepresenting the product.

The Senate has already voted to save net neutrality by passing the CRA measure back in May 2018. The Congressional Review Act (CRA) resolution received a 52-47 vote, seeking to overturn the FCC's repeal of the net neutrality rules.

The 21st Century Internet Act is being seen in a good light by the Internet Association, which represents Google, Facebook, Netflix, and others; it commended Coffman on his bill and called it a "step in the right direction." For the rest of us, it will be quite interesting to see the bill's progress and its fate as it goes through the voting process and then to the White House for final approval.

5 reasons why the government should regulate technology
DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections
Tech, unregulated: Washington Post warns Robocalls could get worse