How-To Tutorials - Programming

1083 Articles

Mixing ASP.NET Webforms and ASP.NET MVC

Packt
12 Oct 2009
6 min read
Ever since Microsoft started working on the ASP.NET MVC framework, one of the primary concerns was the framework's ability to reuse as many features as possible from ASP.NET Webforms. In this article by Maarten Balliauw, we will see how to mix ASP.NET Webforms and ASP.NET MVC in one application and how data is shared between the two technologies.

Not every ASP.NET MVC web application will be built from scratch. Several projects will probably end up migrating from classic ASP.NET to ASP.NET MVC. The question of how to combine both technologies in one application arises: is it possible to combine ASP.NET Webforms and ASP.NET MVC in one web application? Luckily, the answer is yes. Combining ASP.NET Webforms and ASP.NET MVC in one application is possible; in fact, it is quite easy. The reason for this is that the ASP.NET MVC framework has been built on top of ASP.NET. There's actually only one crucial difference: ASP.NET lives in System.Web, whereas ASP.NET MVC lives in System.Web, System.Web.Routing, System.Web.Abstractions, and System.Web.Mvc. This means that adding these assemblies as references to an existing ASP.NET application gives you a good start on combining the two technologies.

Another advantage of ASP.NET MVC being built on top of ASP.NET is that data can easily be shared between the two technologies. For example, the Session state object is available in both, effectively enabling data to be shared via the Session state.

Plugging ASP.NET MVC into an existing ASP.NET application

An ASP.NET Webforms application can become ASP.NET MVC enabled by following some simple steps. First of all, add references to the following three assemblies to your existing ASP.NET application: System.Web.Routing, System.Web.Abstractions, and System.Web.Mvc.

After adding these assembly references, the ASP.NET MVC folder structure should be created. Because the ASP.NET MVC framework is based on conventions (for example, controllers are located in Controllers), these conventions should be respected. Add the folders Controllers, Views, and Views | Shared to your existing ASP.NET application.

The next step in enabling ASP.NET MVC in an ASP.NET Webforms application is to update the web.config file with the following code:

```xml
<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="false">
      <assemblies>
        <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
        <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add assembly="System.Web.Abstractions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add assembly="System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
      </assemblies>
    </compilation>
    <pages>
      <namespaces>
        <add namespace="System.Web.Mvc"/>
        <add namespace="System.Web.Mvc.Ajax"/>
        <add namespace="System.Web.Mvc.Html"/>
        <add namespace="System.Web.Routing"/>
        <add namespace="System.Linq"/>
        <add namespace="System.Collections.Generic"/>
      </namespaces>
    </pages>
    <httpModules>
      <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    </httpModules>
  </system.web>
</configuration>
```

Note that your existing ASP.NET Webforms web.config should not be replaced by the above web.config!
The configured sections should be inserted into an existing web.config file in order to enable ASP.NET MVC.

There's one thing left to do: configure routing. This can easily be done by adding the default ASP.NET MVC global application class contents to an existing (or new) global application class, Global.asax:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

namespace MixingBothWorldsExample
{
    public class Global : System.Web.HttpApplication
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");

            routes.MapRoute(
                "Default",                                              // Route name
                "{controller}/{action}/{id}",                           // URL with parameters
                new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
            );
        }

        protected void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
        }
    }
}
```

This code registers a default ASP.NET MVC route, which will map any URL of the form /Controller/Action/Id to a controller instance and action method. There's one difference from a standard ASP.NET MVC application that needs to be noted: a catch-all route is defined in order to prevent requests for ASP.NET Webforms pages from being routed into ASP.NET MVC. This catch-all route looks like this:

```csharp
routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");
```

It is triggered on every request ending in .aspx, and it tells the routing engine to ignore the request and leave it to ASP.NET Webforms to handle.

With the ASP.NET MVC assemblies referenced, the folder structure created, and the necessary configuration in place, we can now start adding controllers and views. Add a new controller in the Controllers folder, for example, the following simple HomeController:

```csharp
using System.Web.Mvc;

namespace MixingBothWorldsExample.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            ViewData["Message"] = "This is ASP.NET MVC!";
            return View();
        }
    }
}
```

The above controller simply renders a view and passes it a message through the ViewData dictionary. The view, located in Views | Home | Index.aspx, looks like this:

```
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Index.aspx.cs"
    Inherits="MixingBothWorldsExample.Views.Home.Index" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head id="Head1" runat="server">
    <title></title>
</head>
<body>
    <div>
        <h1><%=Html.Encode(ViewData["Message"]) %></h1>
    </div>
</body>
</html>
```

The above view renders a simple HTML page and displays the ViewData dictionary's message as the page heading.
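To illustrate the Session state sharing mentioned earlier, here is a minimal, hypothetical sketch (the page name, controller name, and session key are invented for this example, not taken from the article): a Webforms code-behind stores a value in the Session, and an MVC controller in the same application reads it back.

```csharp
using System;
using System.Web.Mvc;
using System.Web.UI;

namespace MixingBothWorldsExample
{
    // Classic ASP.NET Webforms code-behind writing to the shared Session state
    public partial class LegacyPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Session["LastVisitedPage"] = "LegacyPage.aspx";
        }
    }

    // ASP.NET MVC controller reading the value written by the Webforms page
    public class StatusController : Controller
    {
        public ActionResult Index()
        {
            object lastPage = Session["LastVisitedPage"] ?? "unknown";
            ViewData["Message"] = "Webforms last visited: " + lastPage;
            return View();
        }
    }
}
```

Because both frameworks run inside the same ASP.NET application and share the same session, no extra plumbing is needed to pass the value across.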


Understanding the Dependencies of a C++ Application

Packt
05 Apr 2017
9 min read
This article by Richard Grimes, author of the book Beginning C++ Programming, explains the dependencies of a C++ application. A C++ project will produce an executable or library, and this will be built by the linker from object files. The executable or library is dependent upon these object files. An object file will be compiled from a C++ source file (and potentially one or more header files). The object file is dependent upon these C++ source and header files. Understanding dependencies is important because it helps you understand the order in which to compile the files in your project, and it allows you to make your project builds quicker by only compiling those files that have changed.

Libraries

When you include a file within your source file, the code within that header file becomes accessible to your code. Your include file may contain whole function or class definitions (these will be covered in later chapters), but this will result in a problem: multiple definitions of a function or class. Instead, you can declare a class or function prototype, which indicates how calling code will call the function without actually defining it. Clearly the code will have to be defined elsewhere, and this could be a source file or a library, but the compiler will be happy because it only sees one definition.

A library is code that has already been defined; it has been fully debugged and tested, and therefore users should not need to have access to the source code. The C++ Standard Library is mostly shared through header files, which helps you when you debug your code, but you must resist any temptation to edit these files. Other libraries will be provided as compiled libraries. There are essentially two types of compiled libraries: static libraries and dynamic link libraries. If you use a static library, then the compiler will copy the compiled code that you use from the static library and place it in your executable. If you use a dynamic link (or shared) library, then the linker will add information used during runtime (it may be when the executable is loaded, or it may even be delayed until the function is called) to load the shared library into memory and access the function. Windows uses the extension lib for static libraries and dll for dynamic link libraries. GNU gcc uses the extension a for static libraries and so for shared libraries.

If you use library code in a static or dynamic link library, the compiler will need to know that you are calling a function correctly - to make sure your code calls the function with the correct number of parameters and the correct types. This is the purpose of a function prototype: it gives the compiler the information it needs to know about calling the function without providing the actual body of the function, the function definition. In general, the C++ Standard Library will be included in your code through the standard header files. The C Runtime Library (which provides some code for the C++ Standard Library) will be statically linked, but if the compiler provides a dynamically linked version, you will have a compiler option to use that instead.

Pre-compiled Headers

When you include a file in your source file, the preprocessor will include the contents of that file (after taking into account any conditional compilation directives) and, recursively, any files included by that file. As illustrated earlier, this could result in thousands of lines of code.
As you develop your code, you will often compile the project so that you can test the code. Every time you compile your code, the code defined in the header files will also be compiled, even though the code in library header files will not have changed. With a large project this can make the compilation take a long time. To get around this problem, compilers often offer an option to pre-compile headers that will not change. Creating and using precompiled headers is compiler specific. For example, with gcc you compile a header as if it were a C++ source file (using the -x switch), and the compiler creates a file with an extension of gch. When gcc compiles source files that use the header, it will search for the gch file; if it finds the precompiled header, it will use that, otherwise it will use the header file.

In Visual C++ the process is a bit more complicated, because you have to specifically tell the compiler to look for a precompiled header when it compiles a source file. The convention in Visual C++ projects is to have a source file called stdafx.cpp, which has a single line that includes the file stdafx.h. You put all your stable header file includes in stdafx.h. Next, you create a precompiled header by compiling stdafx.cpp using the /Yc compiler option to specify that stdafx.h contains the stable headers to compile. This will create a pch file (typically, Visual C++ will name it after your project) containing the code compiled up to the point of the inclusion of the stdafx.h header file. Your other source files must include the stdafx.h header file as the first header file, but they may also include other files. When you compile your source files, you use the /Yu switch to specify the stable header file (stdafx.h), and the compiler will use the precompiled pch file instead of the header. When you examine large projects you will often find that precompiled headers are used, and as you can see, they alter the file structure of the project. The example later in this chapter will show how to create and use precompiled headers.

Project Structure

It is important to organize your code into modules to enable you to maintain it effectively. Even if you are writing C-like procedural code (that is, your code involves calls to functions in a linear way), you will still benefit from organizing it into modules. For example, you may have functions that manipulate strings and other functions that access files, so you may decide to put the definition of the string functions in one source file, string.cpp, and the definition of the file functions in another file, file.cpp. So that other modules in the project can use these functions, you must declare the prototypes of the functions in a header file and include that header in the module that uses the functions. There is no absolute rule in the language about the relationship between the header files and the source files that contain the definitions of the functions. You may have a header file called string.h for the functions in string.cpp and a header file called file.h for the functions in file.cpp. Or you may have just one file called utilities.h that contains the declarations for all the functions in both files. The only rule that you have to abide by is that at compile time the compiler must have access to a declaration of the function in the current source file, either through a header file or through the function definition itself.
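As a minimal sketch of that convention (the file and function names below are invented purely for illustration, not taken from the article), the prototype lives in a header, its single definition lives in the matching source file, and any caller only includes the header:

```cpp
// string_utils.h - declarations only, guarded against multiple inclusion
#ifndef STRING_UTILS_H
#define STRING_UTILS_H

#include <string>

// Function prototype: tells the compiler how to call the function
// without providing its definition
std::string to_upper(const std::string& text);

#endif

// string_utils.cpp - the single definition that the linker will use
#include "string_utils.h"
#include <cctype>

std::string to_upper(const std::string& text)
{
    std::string result = text;
    for (char& c : result)
        c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
    return result;
}

// main.cpp - a caller only needs the header to compile; the linker later
// resolves the call against string_utils' object file
#include <iostream>
#include "string_utils.h"

int main()
{
    std::cout << to_upper("hello") << '\n';
    return 0;
}
```

If main.cpp calls a function whose declaration the compiler has not seen, compilation fails, which is exactly the rule described above.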
The compiler will not look forward in a source file, so if a function calls another function in the same source file, that called function must have already been defined before the calling function, or there must be a prototype declaration. This leads to a typical convention of having a header file associated with each source file that contains the prototypes of the functions in the source file, and the source file includes this header. This convention becomes more important when you write classes.

Managing Dependencies

When a project is built with a build tool, checks are performed to see if the outputs of the build exist and, if not, the appropriate actions are performed to build them. Common terminology is that the output of a build step is called a target, and the inputs of the build step (for example, source files) are the dependencies of that target. Each target's dependencies are the files used to make it. The dependencies may themselves be targets of a build action and have their own dependencies. For example, consider the dependencies in the following project: there are three source files (main.cpp, file1.cpp, file2.cpp), each of which includes the same header, utils.h, which is precompiled (hence the fourth source file, utils.cpp, that only includes utils.h). All of the source files depend on utils.pch, which in turn depends upon utils.h. The source file main.cpp has the main function and calls functions in the other two source files (file1.cpp and file2.cpp), and accesses those functions through the associated header files file1.h and file2.h.

On the first compilation, the build tool will see that the executable depends on the four object files, and so it will look for the rule to build each one. In the case of the three C++ source files this means compiling the cpp files, but since utils.obj is used to support the precompiled header, its build rule will be different from that of the other files. When the build tool has made these object files, it will then link them together along with any library code (not shown here). Subsequently, if you change file2.cpp and build the project, the build tool will see that only file2.cpp has changed, and since only file2.obj depends on file2.cpp, all the build tool needs to do is compile file2.cpp and then link the new file2.obj with the existing object files to create the executable. If you change the header file file2.h, the build tool will see that two files depend on this header file, file2.cpp and main.cpp, so it will compile these two source files and link the new object files, file2.obj and main.obj, with the existing object files to form the executable. If, however, the precompiled header file, utils.h, changes, all of the source files will have to be compiled.

Summary

For a small project, dependencies are easy to manage, and as you have seen, for a single source file project you do not even have to worry about calling the linker because the compiler will do that automatically. As a C++ project gets bigger, managing dependencies gets more complex, and this is where development environments like Visual C++ become vital.

Resources for Article:

Further resources on this subject:
- Introduction to C# and .NET [article]
- Preparing to Build Your Own GIS Application [article]
- Writing a Fully Native Application [article]


Python Design Patterns in Depth: The Singleton Pattern

Packt
15 Feb 2016
14 min read
There are situations where you need to create only one instance of data throughout the lifetime of a program. This can be a class instance, a list, or a dictionary, for example. The creation of a second instance is undesirable: it can result in logical errors or malfunctioning of the program. The design pattern that allows you to create only one instance of data is called singleton. In this article, you will learn about module-level, classic, and borg singletons; you'll also learn how they work, when to use them, and how to build a two-threaded web crawler that uses a singleton to access a shared resource.

Singleton is the best candidate when the requirements are as follows:

- Controlling concurrent access to a shared resource
- If you need a global point of access for the resource from multiple or different parts of the system
- When you need to have only one object

Some typical use cases of a singleton are:

- The logging class and its subclasses (a global point of access for the logging class to send messages to the log)
- Printer spooler (your application should only have a single instance of the spooler in order to avoid conflicting requests for the same resource)
- Managing a connection to a database
- File manager
- Retrieving and storing information in external configuration files
- Read-only singletons storing some global state (user language, time, time zone, application path, and so on)

There are several ways to implement singletons. We will look at the module-level singleton, classic singletons, and the borg singleton.

Module-level singleton

All modules are singletons by nature because of Python's module importing steps:

1. Check whether the module is already imported.
2. If yes, return it.
3. If not, find the module, initialize it, and return it.

Initializing a module means executing its code, including all module-level assignments. When you import the module for the first time, all of the initializations will be done; however, if you try to import the module a second time, Python will return the already initialized module. Thus, the initialization will not be done again, and you get the previously imported module with all of its data. So, if you want to quickly make a singleton, use the following approach and keep the shared data as a module attribute.

singletone.py:

```python
only_one_var = "I'm only one var"
```

module1.py:

```python
import singletone

print singletone.only_one_var
singletone.only_one_var += " after modification"

import module2
```

module2.py:

```python
import singletone

print singletone.only_one_var
```

Here, if you import the singletone module and change the value of its global variable in the module1 module, module2 will get the changed variable. This approach is quick and sometimes is all that you need; however, we need to consider the following points:

- It's pretty error-prone. For example, if you happen to forget the global statements, variables local to the function will be created and the module's variables won't be changed, which is not what you want.
- It's ugly, especially if you have a lot of objects that should remain as singletons.
- It pollutes the module namespace with unnecessary variables.
- It doesn't permit lazy allocation and initialization; all global variables will be loaded during the module import process.
- It's not possible to reuse the code because you cannot use inheritance.
- There are no special methods and no object-oriented programming benefits at all.

Classic singleton

With a classic singleton in Python, we check whether an instance has already been created.
If it has been created, we return it; otherwise, we create a new instance, assign it to a class attribute, and return it. Let's try to create a dedicated singleton class:

```python
class Singleton(object):
    def __new__(cls):
        if not hasattr(cls, 'instance'):
            cls.instance = super(Singleton, cls).__new__(cls)
        return cls.instance
```

Here, we override the special __new__ method, which is called right before __init__, and check whether an instance has been created earlier. If not, we create a new instance; otherwise, we return the already created instance. Let's check how it works:

```
>>> singleton = Singleton()
>>> another_singleton = Singleton()
>>> singleton is another_singleton
True
>>> singleton.only_one_var = "I'm only one var"
>>> another_singleton.only_one_var
I'm only one var
```

Try to subclass the Singleton class with another one:

```python
class Child(Singleton):
    pass
```

If it's a successor of Singleton, all of its instances should also be instances of Singleton, thus sharing its state. But this doesn't work, as illustrated in the following code:

```
>>> child = Child()
>>> child is singleton
False
>>> child.only_one_var
AttributeError: Child instance has no attribute 'only_one_var'
```

To avoid this situation, the borg singleton is used.

Borg singleton

Borg is also known as monostate. In the borg pattern, all of the instances are different, but they share the same state. In the following code, the shared state is maintained in the _shared_state attribute, and all new instances of the Borg class will have this state, as defined in its __new__ method:

```python
class Borg(object):
    _shared_state = {}

    def __new__(cls, *args, **kwargs):
        obj = super(Borg, cls).__new__(cls, *args, **kwargs)
        obj.__dict__ = cls._shared_state
        return obj
```

Generally, Python stores the instance state in the __dict__ dictionary, and when instantiated normally, every instance will have its own __dict__. But here we deliberately assign the class variable _shared_state to all of the created instances. Here is how it works with subclassing:

```python
class Child(Borg):
    pass
```

```
>>> borg = Borg()
>>> another_borg = Borg()
>>> borg is another_borg
False
>>> child = Child()
>>> borg.only_one_var = "I'm the only one var"
>>> child.only_one_var
I'm the only one var
```

So, despite the fact that you can't compare the objects by their identity using the is statement, all child objects share the parent's state. If you want a class that is a descendant of the Borg class but has a different state, you can reset _shared_state as follows:

```python
class AnotherChild(Borg):
    _shared_state = {}
```

```
>>> another_child = AnotherChild()
>>> another_child.only_one_var
AttributeError: AnotherChild instance has no attribute 'only_one_var'
```

Which type of singleton should be used is up to you. If you expect that your singleton will not be inherited, you can choose the classic singleton; otherwise, it's better to stick with borg.

Implementation in Python

As a practical example, we'll create a simple web crawler that scans the website you open it on, follows all the links that lead to the same website but to other pages, and downloads all of the images it finds. To do this, we'll need two functions: a function that scans a website for links that lead to other pages, to build a set of pages to visit, and a function that scans a page for images and downloads them. To make it quicker, we'll download images in two threads.
These two threads should not interfere with each other, so pages should not be scanned if another thread has already scanned them, and images that have already been downloaded should not be downloaded again. So, a set of downloaded images and scanned web pages will be a shared resource for our application, and we'll keep it in a singleton instance. In this example, you will need a library for parsing and screen scraping websites, named BeautifulSoup, and an HTTP client library, httplib2. It should be sufficient to install both with either of the following commands:

```
$ sudo pip install BeautifulSoup httplib2
$ sudo easy_install BeautifulSoup httplib2
```

First of all, we'll create a Singleton class. Let's use the classic singleton in this example:

```python
import httplib2
import os
import re
import threading
import urllib
from urlparse import urlparse, urljoin
from BeautifulSoup import BeautifulSoup


class Singleton(object):
    def __new__(cls):
        if not hasattr(cls, 'instance'):
            cls.instance = super(Singleton, cls).__new__(cls)
        return cls.instance
```

It will return the singleton object to all parts of the code that request it. Next, we'll create a class for creating a thread. In this thread, we'll download images from the website:

```python
class ImageDownloaderThread(threading.Thread):
    """A thread for downloading images in parallel."""
    def __init__(self, thread_id, name, counter):
        threading.Thread.__init__(self)
        self.name = name

    def run(self):
        print 'Starting thread ', self.name
        download_images(self.name)
        print 'Finished thread ', self.name
```

The following function traverses the website using a BFS (breadth-first search) approach, finds links, and adds them to a set for further downloading. We can specify the maximum number of links to follow if the website is too large:

```python
def traverse_site(max_links=10):
    link_parser_singleton = Singleton()

    # While we have pages to parse in queue
    while link_parser_singleton.queue_to_parse:
        # If collected enough links to download images, return
        if len(link_parser_singleton.to_visit) == max_links:
            return

        url = link_parser_singleton.queue_to_parse.pop()

        http = httplib2.Http()
        try:
            status, response = http.request(url)
        except Exception:
            continue

        # Skip if not a web page
        if status.get('content-type') != 'text/html':
            continue

        # Add the link to queue for downloading images
        link_parser_singleton.to_visit.add(url)
        print 'Added', url, 'to queue'

        bs = BeautifulSoup(response)

        for link in BeautifulSoup.findAll(bs, 'a'):
            link_url = link.get('href')

            # <a> tag may not contain href attribute
            if not link_url:
                continue

            parsed = urlparse(link_url)

            # If link follows to external webpage, skip it
            if parsed.netloc and parsed.netloc != parsed_root.netloc:
                continue

            # Construct a full url from a link which can be relative
            link_url = (parsed.scheme or parsed_root.scheme) + '://' + \
                (parsed.netloc or parsed_root.netloc) + parsed.path or ''

            # If link was added previously, skip it
            if link_url in link_parser_singleton.to_visit:
                continue

            # Add a link for further parsing
            link_parser_singleton.queue_to_parse = [link_url] + link_parser_singleton.queue_to_parse
```

The following function downloads images from the pages collected in the singleton's to_visit set and saves them to the images directory.
Here, we use a singleton for synchronizing the shared data, which is the set of pages to visit, between the two threads:

```python
def download_images(thread_name):
    singleton = Singleton()

    # While we have pages where we have not downloaded images
    while singleton.to_visit:
        url = singleton.to_visit.pop()

        http = httplib2.Http()
        print thread_name, 'Starting downloading images from', url

        try:
            status, response = http.request(url)
        except Exception:
            continue

        bs = BeautifulSoup(response)

        # Find all <img> tags
        images = BeautifulSoup.findAll(bs, 'img')

        for image in images:
            # Get image source url which can be absolute or relative
            src = image.get('src')

            # Construct a full url. If the image url is relative,
            # it will be prepended with the webpage domain.
            # If the image url is absolute, it will remain as is
            src = urljoin(url, src)

            # Get a base name, for example 'image.png', to name the file locally
            basename = os.path.basename(src)

            if src not in singleton.downloaded:
                singleton.downloaded.add(src)
                print 'Downloading', src

                # Download image to local filesystem
                urllib.urlretrieve(src, os.path.join('images', basename))

        print thread_name, 'finished downloading images from', url
```

Our client code is as follows:

```python
if __name__ == '__main__':
    root = 'http://python.org'

    parsed_root = urlparse(root)

    singleton = Singleton()
    singleton.queue_to_parse = [root]
    # A set of urls to download images from
    singleton.to_visit = set()
    # Downloaded images
    singleton.downloaded = set()

    traverse_site()

    # Create images directory if it does not exist
    if not os.path.exists('images'):
        os.makedirs('images')

    # Create new threads
    thread1 = ImageDownloaderThread(1, "Thread-1", 1)
    thread2 = ImageDownloaderThread(2, "Thread-2", 2)

    # Start new threads
    thread1.start()
    thread2.start()
```

Run the crawler using the following command:

```
$ python crawler.py
```

You should see the threads adding URLs to the queue and downloading images (your output may vary because the order in which the threads access resources is not predictable). If you go to the images directory, you will find the downloaded images there.

Summary

To learn more about design patterns in depth, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

- Learning Python Design Patterns – Second Edition (https://www.packtpub.com/application-development/learning-python-design-patterns-second-edition)
- Mastering Python Design Patterns (https://www.packtpub.com/application-development/mastering-python-design-patterns)

Resources for Article:

Further resources on this subject:
- Python Design Patterns in Depth: The Factory Pattern [Article]
- Recommending Movies at Scale (Python) [Article]
- Customizing IPython [Article]


There's more to learning programming than just writing code

Richard Gall
15 Nov 2019
8 min read
Everyone should learn to code, right? If everyone learned programming, not only would people have better jobs, the economy would be growing, and ultimately we'd all have far superior lives to the ones we lead now. Except - clearly - that's just not true. Yes, perhaps that position is a bit of a caricature, but it's one that isn't that uncommon. Lawmakers talk about the importance of making programming and coding part of the curriculum, and are keen to make loud and enthusiastic noises about investing in STEM subjects. We need more engineers to power the digital economy, the thinking goes.

While introducing children to code certainly isn't a bad thing, this way of viewing the world is pretty damaging - not least to those already in engineering roles and the organizations that depend on them. This is because it reduces the activity of writing code to something simple. It turns programming, a complex and ultimately deeply human activity, into something machine-like. It almost suggests it's just a question of writing letters and numbers into a code editor and then watching the whole thing run. Programming might involve working with machines, but in truth it's anything but machine-like.

For business leaders, failing to understand what programming actually involves can lead to a really poor engineering culture. A reductive view of the work that software engineers do means increased pressure, more burnout, and lower-quality software being delivered. In turn, that has a negative impact on the bottom line. It might not be immediately apparent, but poor software means code rewrites, poor user experiences, and high turnover of personnel. That costs money, because organizations will be spending valuable time and energy trying to fix the mistakes of the past.

We need to keep an open mind about what it means to "learn programming"

However, with a more open-minded perspective on what it actually means to be a programmer, and what 'learning programming' actually means, you can build a much more productive engineering culture. This involves not only respecting the learning process, but also recognising that learning isn't just about taking a course or doing a live coding exercise. It does, in fact, involve a much more diverse range of activities. Let's look at what some of them are.

Evaluating software

One of the most important parts of a software developer's work is evaluating software. This can happen in various ways. Most obviously, technology leaders (CTOs, Principal Architects, Development Leads) have to evaluate different tools and platforms before they implement a project. Questions here will revolve primarily around cost, but that certainly won't be the leadership team's sole concern. Other issues like integration, product capabilities, even the learning curve and level of complexity will need to be considered (will we need to hire specialist engineers, or can our existing team pick it up quickly?). Perhaps that all sounds obvious, but too often we forget that this is work that needs to be done. To make these sorts of assessments - which are often business critical - individuals will need a high degree of knowledge. Without it, they can't be confident that they're making the right decision for the business. In this sense, then, learning about technologies is just as important as the process of learning how to use technologies. Some might say it's even more important.
It's not only senior developers and tech leaders that evaluate software

Evaluating software is by no means a task limited to those in senior positions. Developers and engineers who spend the majority of their time shipping code will still need to learn about technologies too. They might not be responsible for architecting a new software system or purchasing PaaS products, but they will have to make personal decisions about what tools they use to solve specific problems. This might sometimes be about the tools they use to boost their productivity and better manage their development workflow, but it isn't limited to that. In broad terms, it's about having an open mind about the range of approaches that can be taken to new challenges. This means that all technology professionals need to learn about technologies - how they work, how they compare to one another, and even what the trade-offs between them are. This shouldn't be treated as an optional extra, but as a fundamental part of the learning process.

Read next: Developers are today's technology decision makers

Programming techniques and design principles

When talking about learning, it's easy to fall into a trap where we privilege practice over theory. Theory, certain lines of thinking go, is self-indulgent, unnecessary, and time-consuming. What's really important is that people can simply start getting their hands dirty and learn by doing. While it's true that the practical dimension of learning is vital - in technology or any other field - we overlook theory at our peril. In reality, theory and practice should go together. Practice should be a way of illuminating the theory, and theory should be a way of explaining why something works the way it does, or why you should do something in a certain way. Think of it this way: if everyone only learned through practice, we'd all be incapable of applying our skills and knowledge to new problems and challenges. We'd be fixed in our mindset, more like machines than creative human beings. For developers and software engineers this is particularly true. By understanding the principles behind how something works, it becomes much easier to apply solutions to new contexts or even reconfigure them in ways that are appropriate and effective.

Improving software with design-led principles

Programming techniques and philosophies, like functional or object-oriented programming, can help developers and engineers write code in a specific way, helping them to unlock greater performance and efficiency (both personally and from a technical perspective). Similarly, design patterns provide a way of thinking about your code in a predetermined way in relation to various commonly occurring problems. It's true that this still requires developers to get close to code. But it is actually a level of abstraction above the practice of writing code, one that allows developers to think critically about what they do. So, while a good way to learn these sorts of principles is to see what they look like in practice, it's still essential for developers to have a robust conceptual understanding of them.

Understanding users and business needs

Software doesn't exist in a vacuum. On one side there's the business; on the other there's a user. It sounds obvious, but it's essential that technology professionals are sensitive to these two contextual elements. Business needs and user needs are what ultimately make their work meaningful.
In practice, this doesn’t mean people working in technology all need to go and take an MBA. But they do need to have a clear conceptual understanding of how software development and software systems should align with the needs of both internal stakeholders (ie. the business), and users. This isn’t always easy to learn, and there’s no manual for how it should be done. However, it fits across the two points we mentioned above. The software we decide to use, and the way we decide to use it will always be informed by the needs of both the business and users. What this means in practice, then, is that learning about software needs to be informed by the wider context of what that software is for, and what a business is trying to achieve. Some technology professionals enter the industry possessing this kind of awareness and sensitivity. Many others, however, do not, and for these people it’s essential that they have the space to understand how the various facets of the work they do are connected to real-life consequences. Writing code doesn’t help you to do that. Taking a step back and understanding the context in which that code is being written can and will. Read next: 6 reasons why employers should pay for their developers’ training and learning resources Conclusion: Great programming requires a combination of theoretical knowledge and practical talent The opposition between theory and practice is false. It doesn’t help anyone. A culture of ‘getting stuff done’ and shipping code regardless is not only bad for individual developers, it can also be damaging at an organizational level. Without a careful consideration of what you’re trying to achieve, how software can help you to do it, and what it requires to execute it effectively, organizations can become prone to error and mistakes. This leads to wasted time and, more importantly, wasted money. While Facebook’s mantra of ‘move fast and break things’ might sound like the defining phrase of the modern tech industry, good developers need both space and resources to think, plan, and conceptualize. This doesn’t mean we all need to go slow. Instead, it means we need to try to empower engineers to do the right thing, not the quick thing. Give your team access to a diverse range of resources to learn everything they need to build better software. Start a Packt for Teams subscription today.


What matters on an engineering resume? Hacker Rank report says skills, not certifications

Richard Gall
24 May 2018
6 min read
Putting together an engineering resume can be a real headache. What should you include? How can you best communicate your experience and skills? Software engineers are constantly under pressure to deliver new projects and fix problems while learning new skills. Documenting the complexity of developer life in a straightforward and marketable manner is a challenge, to say the least. Luckily, hiring managers and tech recruiters today recognize just how difficult communicating skill and competency in an engineering resume can be. A report by Hacker Rank revealed that the things that feature on a traditional resume aren't that highly valued by recruiters and hiring managers. However, skills remain top of the agenda: the question, really, is about how we demonstrate and communicate those skills.

The quality of your previous experience matters on an engineering resume

Hacker Rank found that hiring managers and tech recruiters value previous experience over everything else. 77% of survey respondents said previous experience was one of the 3 most important qualifications before a formal interview. In second place was years of experience, with 46%. The difference between the two is subtle but important; it offers a useful takeaway for engineers creating an engineering resume. Essentially, the quality of your experience is more important than the quantity of your experience. You need to make sure you communicate the details of your employment experience. It sounds obvious, but it's worth stating: applying for an engineering job isn't just a competition based on who has the most experience. You should explain the nature of the projects you have worked on. The skills you used are essential, but being clear about how the project supported wider strategic or tactical goals is also important. This demonstrates not only your skills but also your contextual awareness. It suggests to a hiring manager or recruiter that you not only have the competence, but that you are also a team player with commercial awareness.

Certifications aren't that important on your resume

One of the most interesting insights from the Hacker Rank report was that both hiring managers and recruiters don't really care about certifications any more. Less than 16% listed certifications as one of the 3 most important things they look at during the recruitment process. Does this mean, then, that the era of the certification is well and truly over? At this stage, it's hard to tell. But it does point to a wider cultural change that probably has a lot to do with open source. Because change is built into the reality of open source software, certifications are never going to be able to keep up with what's new and important. The things you learned to pass one year will likely be out of date the next. It probably also says something about the nature of technical roles today. Years ago, engineers would start a job knowing what they were going to be using. The toolchains and tech stacks would be relatively stable and consistent. In this context, certification was like a license, proving you understood the various components of a given tool or suite of tools. But today, it's more important for engineers to prove that they are both adaptable and capable of solving a range of different problems. With that in mind, it's essential to demonstrate your flexibility on your engineering resume. Make it clear that you're able to learn new things quickly, and that you can adapt your skill set to the problems you need to solve.

You don't need to look good on paper to get the job... but it's going to help

Hacker Rank's research also revealed that 75% of recruiters and hiring managers have hired people they initially thought didn't look good on paper. But that doesn't necessarily mean you should stop working on your resume. If anything, what this shows is that if you get your resume right, you could really catch someone's attention. You need to consider everything in your resume. Traditional resumes have a pretty clear structure, whatever job you're applying for, but if Hacker Rank's research tells us anything, it's that an engineering resume requires a slightly different approach.

Personal projects are more important than your portfolio on an engineering resume

A further insight from Hacker Rank's report suggests one way you might adopt a different approach to your resume. Responding to the same question as the one we looked at above, 37% said personal projects were one of the 3 most important factors in determining whether to invite a candidate to interview. By contrast, only 22% said portfolio. This seems strange - surely a portfolio offers a deeper insight into someone's professional experience, and personal projects are more like hobbies, right? In fact, personal projects tell you much more about a candidate than a portfolio. A portfolio is largely determined by the work you have been doing. What's more, it's not always that easy to communicate value in a portfolio. Equally, if you've been badly managed, or faced a lack of support, your portfolio might not actually be a good reflection of how good you really are. Personal projects give you an insight into how a person thinks. They show recruiters what makes an engineer tick. In the workplace your scope for creativity and problem solving might well be limited. With personal projects you're free to test out ideas and try new tools. You're able to experiment. So, when you're putting together an engineering resume, make sure you dedicate some time to outlining your personal projects. Consider these sorts of questions: Why did you start the project? What did you find interesting? What did you learn?

Engineering skills still matter

Just because the traditional resume appears to have fallen out of favor, it doesn't mean your skills don't matter. In fact, skill matters more than ever. For a third of hiring managers, skill assessments are the area they want to invest in. This would allow them to judge a candidate's competencies and skills much more effectively than simply looking at a resume. As we've seen, things like personal projects are valuable because they demonstrate skills in a way that is often difficult to do on paper. They not only prove you have the technical skills you say you have, they also provide a good indication of how you think and how you might approach solving problems. They can help illustrate how you deploy those skills. And when it's so easy to learn how to write lines of code (no bad thing, true), showing how you think and apply your skills is a sure-fire way to make sure you stand out from the crowd.

Read next:
- How to assess your tech team's skills
- Are technical skills overrated when hiring tech pros?
- 'Soft' Skills Every Data Pro Needs


Understanding the Foundation of Protocol-oriented Design

Expert Network
30 Jun 2021
7 min read
When Apple announced Swift 2 at the Worldwide Developers Conference (WWDC) in 2015, they also declared that Swift was the world's first protocol-oriented programming (POP) language. From its name, we might assume that POP is all about protocols; however, that would be a wrong assumption. POP is about so much more than just protocols; it is actually a new way of not only writing applications but also thinking about programming. This article is an excerpt from the book Mastering Swift, 6th Edition by Jon Hoffman. In this article, we will discuss protocol-oriented design and how we can use protocols and protocol extensions to replace superclasses. We will look at how to define animal types for a video game in a protocol-oriented way.

Requirements

When we develop applications, we usually have a set of requirements that we need to develop against. With that in mind, let's define the requirements for the animal types that we will be creating in this article:

- We will have three categories of animals: land, sea, and air.
- Animals may be members of multiple categories. For example, an alligator can be a member of both the land and sea categories.
- Animals may attack and/or move when they are on a tile that matches the categories they are in.
- Animals will start off with a certain number of hit points, and if those hit points reach 0 or less, then they will be considered dead.

POP Design

We will start off by looking at how we would design the animal types needed and the relationships between them. Figure 1 shows our protocol-oriented design:

Figure 1: Protocol-oriented design

In this design, we use three techniques: protocol inheritance, protocol composition, and protocol extensions.

Protocol inheritance

Protocol inheritance is where one protocol can inherit the requirements from one or more additional protocols. A protocol can inherit requirements from multiple protocols, whereas a class in Swift can have only one superclass. Protocol inheritance is extremely powerful because we can define several smaller protocols and mix and match them to create larger protocols. You will want to be careful not to create protocols that are too granular, because they will become hard to maintain and manage.

Protocol composition

Protocol composition allows types to conform to more than one protocol. With protocol-oriented design, we are encouraged to create multiple smaller protocols with very specific requirements; we will see how protocol composition works later in this article. Protocol inheritance and composition are really powerful features but can also cause problems if used wrongly. On their own, protocol composition and inheritance may not seem that powerful; however, when we combine them with protocol extensions, we have a very powerful programming paradigm. Let's look at how powerful this paradigm is.

Protocol-oriented design: putting it all together

We will begin by writing the Animal superclass as a protocol:

```swift
protocol Animal {
    var hitPoints: Int { get set }
}
```

In the Animal protocol, the only item that we are defining is the hitPoints property. If we were putting in all the requirements for an animal in a video game, this protocol would contain all the requirements that would be common to every animal; here, we only need to add the hitPoints property. Next, we need to add an Animal protocol extension, which will contain the functionality that is common to all types that conform to the protocol.
Our Animal protocol extension contains the following code:

```swift
extension Animal {
    mutating func takeHit(amount: Int) {
        hitPoints -= amount
    }

    func hitPointsRemaining() -> Int {
        return hitPoints
    }

    func isAlive() -> Bool {
        return hitPoints > 0 ? true : false
    }
}
```

The Animal protocol extension contains the takeHit(), hitPointsRemaining(), and isAlive() methods. Any type that conforms to the Animal protocol will automatically inherit these three methods. Now let's define our LandAnimal, SeaAnimal, and AirAnimal protocols. These protocols will define the requirements for the land, sea, and air animals respectively:

```swift
protocol LandAnimal: Animal {
    var landAttack: Bool { get }
    var landMovement: Bool { get }

    func doLandAttack()
    func doLandMovement()
}

protocol SeaAnimal: Animal {
    var seaAttack: Bool { get }
    var seaMovement: Bool { get }

    func doSeaAttack()
    func doSeaMovement()
}

protocol AirAnimal: Animal {
    var airAttack: Bool { get }
    var airMovement: Bool { get }

    func doAirAttack()
    func doAirMovement()
}
```

These three protocols only contain the functionality needed for their particular type of animal, and each of them contains only four requirements. This makes our protocol design much easier to read and manage. The protocol design is also much safer because the functionalities for the various animal types are isolated in their own protocols rather than being embedded in a giant superclass. We are also able to avoid the use of flags to define the animal category and, instead, define the category of an animal by the protocols it conforms to. In a full design, we would probably need to add some protocol extensions for each of the animal types, but we do not need them for our example here.

Now, let's look at how we would create our Lion and Alligator types using protocol-oriented design:

```swift
struct Lion: LandAnimal {
    var hitPoints = 20
    let landAttack = true
    let landMovement = true

    func doLandAttack() { print("Lion Attack") }
    func doLandMovement() { print("Lion Move") }
}

struct Alligator: LandAnimal, SeaAnimal {
    var hitPoints = 35
    let landAttack = true
    let landMovement = true
    let seaAttack = true
    let seaMovement = true

    func doLandAttack() { print("Alligator Land Attack") }
    func doLandMovement() { print("Alligator Land Move") }
    func doSeaAttack() { print("Alligator Sea Attack") }
    func doSeaMovement() { print("Alligator Sea Move") }
}
```

Notice that we specify that the Lion type conforms to the LandAnimal protocol, while the Alligator type conforms to both the LandAnimal and SeaAnimal protocols. As we saw previously, having a single type that conforms to multiple protocols is called protocol composition, and it is what allows us to use smaller protocols rather than one giant monolithic superclass. Both the Lion and Alligator types originate from the Animal protocol; therefore, they will inherit the functionality added with the Animal protocol extension. If our animal type protocols also had extensions, then they would also inherit the functionality added by those extensions. With protocol inheritance, composition, and extensions, our concrete types contain only the functionality needed by the particular animal types that they conform to.

Since the Lion and Alligator types originate from the Animal protocol, we can use polymorphism. Let's look at how this works:

```swift
var animals = [Animal]()

animals.append(Alligator())
animals.append(Alligator())
animals.append(Lion())

for (index, animal) in animals.enumerated() {
    if let _ = animal as? AirAnimal {
        print("Animal at \(index) is Air")
    }
    if let _ = animal as? LandAnimal {
        print("Animal at \(index) is Land")
    }
    if let _ = animal as? SeaAnimal {
        print("Animal at \(index) is Sea")
    }
}
```

In this example, we create an array named animals that will contain Animal types. We then create two instances of the Alligator type and one instance of the Lion type and add them to the animals array. Finally, we use a for-in loop to loop through the array and print out the animal type based on the protocols that each instance conforms to.

Upgrade your knowledge and become an expert in the latest version of the Swift programming language with Mastering Swift 5.3, 6th Edition by Jon Hoffman.

About the author: Jon Hoffman has over 25 years of experience in the field of information technology. He has worked in the areas of system administration, network administration, network security, application development, and architecture. Currently, Jon works as an Enterprise Software Manager for Syn-Tech Systems.

Understanding Microservices

Packt
22 Jun 2017
19 min read
This article by Tarek Ziadé, author of the book Python Microservices Development, explains the benefits and implementation of microservices with Python. While the microservices architecture looks more complicated than its monolithic counterpart, its advantages are multiple. It offers the following benefits.

Separation of concerns

First of all, each microservice can be developed independently by a separate team. For instance, building a reservation service can be a full project on its own. The team in charge can build it with whatever programming language and database they choose, as long as it has a well-documented HTTP API. That also means the evolution of the app is more under control than with monoliths. For example, if the payment system changes its underlying interactions with the bank, the impact is localized inside that service and the rest of the application stays stable and under control. This loose coupling greatly improves the overall project velocity, as we're applying at the service level a philosophy similar to the single responsibility principle. The single responsibility principle was defined by Robert Martin to explain that a class should have only one reason to change - in other words, each class should provide a single, well-defined feature. Applied to microservices, it means that we want to make sure that each microservice focuses on a single role.

Smaller projects

The second benefit is breaking up the complexity of the project. When you are adding a feature to an application, like PDF reporting, even if you are doing it cleanly, you are making the code base bigger, more complicated, and sometimes slower. Building that feature in a separate application avoids this problem and makes it easier to write it with whatever tools you want. You can refactor it often, shorten your release cycles, and stay on top of things. The growth of the application remains under your control. Dealing with a smaller project also reduces risks when improving the application: if a team wants to try out the latest programming language or framework, they can iterate quickly on a prototype that implements the same microservice API, try it out, and decide whether or not to stick with it. One real-life example is the Firefox Sync storage microservice. There are currently some experiments to switch from the current Python+MySQL implementation to a Go-based one that stores user data in standalone SQLite databases. That prototype is highly experimental, but since we have isolated the storage feature in a microservice with a well-defined HTTP API, it's easy enough to give it a try with a small subset of the user base.

Scaling and deployment

Last, having your application split into components makes it easier to scale depending on your constraints. Let's say you are starting to get a lot of customers who are booking hotels daily, and the PDF generation is starting to heat up the CPUs. You can deploy that specific microservice on servers that have bigger CPUs. Another typical example is RAM-consuming microservices, like the ones that interact with in-memory databases like Redis or Memcached. You could tweak your deployments accordingly by deploying them on servers with less CPU and a lot more RAM.

To summarize the benefits of microservices:

- A team can develop each microservice independently and use whatever technological stack makes sense. They can define a custom release cycle. The tip of the iceberg is the service's language-agnostic HTTP API.
Developers break the application complexity into logical components. Each microservice focuses on doing one thing well.
Since microservices are standalone applications, there is finer control over deployments, which makes scaling easier.
Microservices architectures are good at solving a lot of the problems that may arise once your application starts to grow. However, we need to be aware of some of the new issues they also bring in practice.

Implementing microservices with Python

Python is an amazingly versatile language. As you probably already know, it's used to build many different kinds of applications, from simple system scripts that perform tasks on a server, to large object-oriented applications that run services for millions of users. According to a study conducted by Philip Guo in 2014, published on the Association for Computing Machinery (ACM) website, Python has surpassed Java in top U.S. universities and is the most popular language for learning computer science. This trend is also true in the software industry. Python now sits in the top five languages of the TIOBE index (http://www.tiobe.com/tiobe-index/), and it's probably even more dominant in web development, since languages like C are rarely used as the main language for building web applications. However, some developers criticize Python for being slow and unfit for building efficient web services. Python is slow, and this is undeniable. But it is still a language of choice for building microservices, and many major companies are happily using it. This section will give you some background on the different ways you can write microservices using Python, some insights on asynchronous versus synchronous programming, and conclude with some details on Python's performance. It's composed of five parts:
The WSGI standard
Greenlet & Gevent
Twisted & Tornado
asyncio
Language performances

The WSGI standard

What strikes most web developers starting with Python is how easy it is to get a web application up and running. The Python web community has created a standard, inspired by the Common Gateway Interface (CGI), called the Web Server Gateway Interface (WSGI). It greatly simplifies how you write a Python application whose goal is to serve HTTP requests. When your code uses that standard, your project can be executed by standard web servers like Apache or nginx, through WSGI extensions like uwsgi or mod_wsgi. Your application just has to deal with incoming requests and send back JSON responses, and Python includes all that goodness in its standard library. You can create a fully functional microservice that returns the server's local time with a vanilla Python module of fewer than ten lines:

import json
import time

def application(environ, start_response):
    headers = [('Content-type', 'application/json')]
    start_response('200 OK', headers)
    return [bytes(json.dumps({'time': time.time()}), 'utf8')]

Since its introduction, the WSGI protocol has become an essential standard, and the Python web community has widely adopted it. Developers wrote middlewares, which are functions you can hook before or after the WSGI application function itself, to do something with the environment. Some web frameworks, like Bottle (http://bottlepy.org), were created specifically around that standard, and soon enough every framework out there could be used through WSGI in one way or another. The biggest problem with WSGI, though, is its synchronous nature.
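For quick local testing of the example above, you do not even need Apache or nginx: Python's standard library ships a reference WSGI server. The snippet below is a minimal sketch that serves the same application on port 8000; the port number is an arbitrary choice for the example, and this server is meant for development only, not production.

import json
import time
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Same WSGI callable as in the example above.
    headers = [('Content-type', 'application/json')]
    start_response('200 OK', headers)
    return [bytes(json.dumps({'time': time.time()}), 'utf8')]

# Standard-library reference server, good enough for local experiments.
httpd = make_server('', 8000, application)
print("Listening on http://localhost:8000 ...")
httpd.serve_forever()

Visiting http://localhost:8000 should return a small JSON document containing the current timestamp.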
The application function you see above is called exactly once per incoming request, and when the function returns, it has to send back the response. That means that every time you call the function, it will block until the response is ready. And writing microservices means your code will be waiting for responses from various network resources all the time. In other words, your application will idle and just block the client until everything is ready. That's an entirely acceptable behavior for HTTP APIs. We're not talking about building bidirectional applications like WebSocket-based ones. But what happens when several incoming requests call your application at the same time? WSGI servers will let you run a pool of threads to serve several requests concurrently. But you can't run thousands of them, and as soon as the pool is exhausted, the next request will block even if your microservice is doing nothing but idling and waiting for backend service responses. That's one of the reasons why non-WSGI frameworks like Twisted and Tornado, and, in JavaScript land, Node.js, became very successful: they are fully asynchronous. When you're coding a Twisted application, you can use callbacks to pause and resume the work done to build a response. That means you can accept new requests and start processing them. That model dramatically reduces the idling time in your process. It can serve thousands of concurrent requests. Of course, that does not mean the application will return each individual response faster. It just means one process can accept more concurrent requests, and juggle between them as the data gets ready to be sent back. There's no simple way with the WSGI standard to introduce something similar, and the community has debated for years to come up with a consensus, and failed. The odds are that the community will eventually drop the WSGI standard for something else. In the meantime, building microservices with synchronous frameworks is still possible and completely fine, as long as your deployments take into account the one request == one thread limitation of the WSGI standard. There is, however, one trick to boost synchronous web applications: greenlets.

Greenlet & Gevent

The general principle of asynchronous programming is that the process deals with several concurrent execution contexts to simulate parallelism. Asynchronous applications use an event loop that pauses and resumes execution contexts when an event is triggered: only one context is active at a time, and they take turns. Explicit instructions in the code tell the event loop where it can pause the execution. When that occurs, the process looks for some other pending work to resume. Eventually, the process comes back to your function and continues it where it stopped. Moving from one execution context to another is called switching. The Greenlet project (https://github.com/python-greenlet/greenlet) is a package based on the Stackless project, a particular CPython implementation, and provides greenlets. Greenlets are pseudo-threads that are very cheap to instantiate, unlike real threads, and that can be used to call Python functions. Within those functions, you can switch and give back control to another function. The switching is done with an event loop, and allows you to write an asynchronous application using a thread-like interface paradigm.
Here's an example adapted from the Greenlet documentation:

from greenlet import greenlet

def test1(x, y):
    z = gr2.switch(x + y)
    print(z)

def test2(u):
    print(u)
    gr1.switch(42)

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch("hello", " world")

The two greenlets explicitly switch from one to the other. For building microservices based on the WSGI standard, if the underlying code used greenlets, we could accept several concurrent requests and just switch from one to another when we know a call is going to block the request, such as performing a SQL query. However, switching from one greenlet to another has to be done explicitly, and the resulting code can quickly become messy and hard to understand. That's where Gevent becomes very useful. The Gevent project (http://www.gevent.org/) is built on top of Greenlet and offers, among other things, an implicit and automatic way of switching between greenlets. It provides a cooperative version of the socket module that uses greenlets to automatically pause and resume execution when some data becomes available on a socket. There's even a monkey-patch feature that automatically replaces the standard library socket with Gevent's version. That makes your standard synchronous code magically asynchronous every time it uses sockets, with just one extra line:

from gevent import monkey; monkey.patch_all()

def application(environ, start_response):
    headers = [('Content-type', 'application/json')]
    start_response('200 OK', headers)
    # ...do something with sockets here...
    return result

This implicit magic comes with a price, though. For Gevent to work well, all the underlying code needs to be compatible with the patching Gevent does. Some packages from the community will continue to block, or even produce unexpected results, because of this, in particular if they use C extensions that bypass the patched features of the standard library. But for most cases, it works well. Projects that play well with Gevent are dubbed "green," and when a library is not functioning well and the community asks its authors to "make it green," it usually happens. This is, for instance, what was used to scale the Firefox Sync service at Mozilla.
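To see what the monkey patching buys you in practice, here is a small sketch that downloads several pages concurrently while the code itself is written as plain blocking calls. The URLs and the 10-second timeout are placeholders chosen purely for the example.

from gevent import monkey; monkey.patch_all()

import gevent
from urllib.request import urlopen

# Placeholder URLs; replace with whatever backends your service talks to.
URLS = ["http://example.com", "http://example.org", "http://example.net"]

def fetch(url):
    # Looks like ordinary blocking code, but the patched socket module
    # lets other greenlets run while this one waits on the network.
    return url, len(urlopen(url).read())

jobs = [gevent.spawn(fetch, url) for url in URLS]
gevent.joinall(jobs, timeout=10)
for job in jobs:
    if job.successful():
        print(job.value)

Each fetch runs in its own greenlet, so the total time is roughly that of the slowest download rather than the sum of all of them.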
Twisted and Tornado

If you are building microservices where increasing the number of concurrent requests you can hold is important, it's tempting to drop the WSGI standard and just use an asynchronous framework like Tornado (http://www.tornadoweb.org/) or Twisted (https://twistedmatrix.com/trac/). Twisted has been around for ages. To implement the same microservice, you need to write slightly more verbose code:

import json
import time

from twisted.web import server, resource
from twisted.internet import reactor, endpoints

class Simple(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        request.responseHeaders.addRawHeader(b"content-type", b"application/json")
        return bytes(json.dumps({'time': time.time()}), 'utf8')

site = server.Site(Simple())
endpoint = endpoints.TCP4ServerEndpoint(reactor, 8080)
endpoint.listen(site)
reactor.run()

While Twisted is an extremely robust and efficient framework, it suffers from a few problems when building HTTP microservices:
You need to implement each endpoint in your microservice with a class derived from a Resource class that implements each supported method. For a few simple APIs, it adds a lot of boilerplate code.
Twisted code can be hard to understand and debug due to its asynchronous nature.
It's easy to fall into callback hell when you chain too many functions that get triggered successively one after the other, and the code can get messy.
Properly testing your Twisted application is hard, and you have to use a Twisted-specific unit-testing model.

Tornado is based on a similar model, but does a better job in some areas. It has a lighter routing system and does everything possible to keep the code close to plain Python. Tornado also uses a callback model, so debugging can be hard. But both frameworks are working hard at bridging the gap to rely on the new async features introduced in Python 3.

asyncio

When Guido van Rossum started to work on adding async features in Python 3, part of the community pushed for a Gevent-like solution, because it made a lot of sense to write applications in a synchronous, sequential fashion rather than having to add explicit callbacks like in Tornado or Twisted. But Guido picked the explicit technique, and experimented in a project called Tulip, inspired by Twisted. Eventually, asyncio was born out of that side project and added into Python. In hindsight, implementing an explicit event loop mechanism in Python instead of going the Gevent way makes a lot of sense. The way the Python core developers coded asyncio, and how they elegantly extended the language with the async and await keywords to implement coroutines, made asynchronous applications built with vanilla Python 3.5+ code look very elegant and close to synchronous programming. By doing this, Python did a great job of avoiding the callback syntax mess we sometimes see in Node.js or Twisted (Python 2) applications. And beyond coroutines, Python 3 has introduced a full set of features and helpers in the asyncio package to build asynchronous applications; see https://docs.python.org/3/library/asyncio.html. Python is now as expressive as languages like Lua for creating coroutine-based applications, and there are now a few emerging frameworks that have embraced those features and will only work with Python 3.5+ to benefit from this. KeepSafe's aiohttp (http://aiohttp.readthedocs.io) is one of them, and building the same microservice, fully asynchronous, with it would simply take these few elegant lines:

from aiohttp import web
import time

async def handle(request):
    return web.json_response({'time': time.time()})

if __name__ == '__main__':
    app = web.Application()
    app.router.add_get('/', handle)
    web.run_app(app)

In this small example, we're very close to how we would implement a synchronous app. The only hint that we are asynchronous is the async keyword, which marks the handle function as being a coroutine. And that's what's going to be used at every level of an async Python app going forward. Here's another example using aiopg, a PostgreSQL library for asyncio, taken from the project documentation:

import asyncio
import aiopg

dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1'

async def go():
    pool = await aiopg.create_pool(dsn)
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            await cur.execute("SELECT 1")
            ret = []
            async for row in cur:
                ret.append(row)
            assert ret == [(1,)]

loop = asyncio.get_event_loop()
loop.run_until_complete(go())

With a few async and await prefixes, the function that performs a SQL query and sends back the result looks a lot like a synchronous function.
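The same coroutine machinery is available without any third-party framework. The following sketch uses only the standard library (assuming Python 3.7 or later for asyncio.run) to run three simulated I/O-bound calls concurrently; the task names and sleep durations are invented stand-ins for real network requests.

import asyncio
import time

async def fake_request(name, delay):
    await asyncio.sleep(delay)  # stands in for an HTTP or database call
    return f"{name} done in {delay}s"

async def main():
    start = time.time()
    results = await asyncio.gather(
        fake_request("users", 1.0),
        fake_request("bookings", 2.0),
        fake_request("reports", 1.5),
    )
    print(results)
    # Total wall time is close to the slowest task (~2s), not the sum (4.5s).
    print(f"elapsed: {time.time() - start:.1f}s")

asyncio.run(main())

Because each coroutine yields control whenever it awaits, a single thread interleaves all three of them, which is exactly the property the frameworks above build upon.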
But asynchronous frameworks and libraries based on Python 3 are still emerging, and if you are using asyncio or a framework like aiohttp, you will need to stick with particular asynchronous implementations for each feature you need. If you need to use a library that is not asynchronous in your code, calling it from your asynchronous code means you will need to go through some extra and challenging work if you want to prevent blocking the event loop. If your microservices deal with a limited number of resources, it could be manageable. But it's probably a safer bet at this point (2017) to stick with a synchronous framework that's been around for a while rather than an asynchronous one. Let's enjoy the existing ecosystem of mature packages, and wait until the asyncio ecosystem gets more sophisticated. And there are many great synchronous frameworks for building microservices with Python, like Bottle, Pyramid with Cornice, or Flask.

Language performances

In the previous sections we went through the two different ways to write microservices, asynchronous versus synchronous, and whatever technique you use, the speed of Python directly impacts the performance of your microservice. Of course, everyone knows Python is slower than Java or Go, but execution speed is not always the top priority. A microservice is often a thin layer of code that spends most of its life waiting for network responses from other services. Its core speed is usually less important than how long your SQL queries take to return from your Postgres server, because the latter will represent most of the time spent building the response. But wanting an application that's as fast as possible is legitimate. One controversial topic in the Python community around speeding up the language is how the Global Interpreter Lock (GIL) mutex can ruin performance, because multi-threaded applications cannot effectively use several cores. The GIL has good reasons to exist. It protects non-thread-safe parts of the CPython interpreter, and exists in other languages like Ruby. And all attempts to remove it so far have failed to produce a faster CPython implementation. Larry Hastings is working on a GIL-free CPython project called Gilectomy (https://github.com/larryhastings/gilectomy). Its minimal goal is to come up with a GIL-free implementation that can run a single-threaded application as fast as CPython. As of today (2017), this implementation is still slower than CPython. But it's interesting to follow this work, and see if it reaches speed parity one day. That would make a GIL-free CPython very appealing. For microservices, besides preventing the usage of multiple cores in the same process, the GIL will slightly degrade performance under high load, because of the system-call overhead introduced by the mutex. However, all the scrutiny around the GIL has had one beneficial impact: some work has been done in the past years to reduce its contention in the interpreter, and in some areas, Python's performance has improved a lot. Bear in mind that even if the core team removed the GIL, Python is an interpreted language and the produced code will never be very efficient at execution time. Python provides the dis module if you are interested in seeing how the interpreter decomposes a function. In the example below, the interpreter will decompose a simple function that yields incremented values from a sequence in no less than 29 steps!

>>> def myfunc(data):
...     for value in data:
...         yield value + 1
...
>>> import dis
>>> dis.dis(myfunc)
  2           0 SETUP_LOOP              23 (to 26)
              3 LOAD_FAST                0 (data)
              6 GET_ITER
        >>    7 FOR_ITER                15 (to 25)
             10 STORE_FAST               1 (value)

  3          13 LOAD_FAST                1 (value)
             16 LOAD_CONST               1 (1)
             19 BINARY_ADD
             20 YIELD_VALUE
             21 POP_TOP
             22 JUMP_ABSOLUTE            7
        >>   25 POP_BLOCK
        >>   26 LOAD_CONST               0 (None)
             29 RETURN_VALUE

A similar function written in a statically compiled language dramatically reduces the number of operations required to produce the same result. There are ways to speed up Python execution, though. One is to write part of your code as compiled code, by building C extensions or by using a static extension of the language like Cython (http://cython.org/), but that makes your code more complicated. Another solution, which is the most promising one, is simply running your application with the PyPy interpreter (http://pypy.org/). PyPy implements a Just-In-Time (JIT) compiler. This compiler directly replaces, at runtime, pieces of Python with machine code that can be used by the CPU as-is. The whole trick of the JIT is to detect, in real time and ahead of the execution, when and how to do it. Even if PyPy is always a few Python versions behind CPython, it has reached a point where you can use it in production, and its performance can be quite amazing. In one of our projects at Mozilla that needs fast execution, the PyPy version was almost as fast as the Go version, and we decided to use Python there instead. The PyPy Speed Center website is a great place to look at how PyPy compares to CPython: http://speed.pypy.org/. However, if your program uses C extensions, you will need to recompile them for PyPy, and that can be a problem, in particular if other developers maintain some of the extensions you are using. But if you are building your microservice with a standard set of libraries, the chances are that it will work out of the box with the PyPy interpreter, so it's worth a try. In any case, for most projects, the benefits of Python and its ecosystem largely surpass the performance issues described in this section, because the overhead in a microservice is rarely a problem.

Summary

In this article we saw that Python is considered to be one of the best languages for writing web applications, and therefore microservices, for the same reasons it is a language of choice in other areas, and also because it provides tons of mature frameworks and packages to do the work.

Resources for Article:

Further resources on this subject:
Inbuilt Data Types in Python [article]
Getting Started with Python Packages [article]
Layout Management for Python GUI [article]

NHibernate 3.0: Testing Using NHibernate Profiler and SQLite

Packt
06 Oct 2010
6 min read
  NHibernate 3.0 Cookbook Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications Master the full range of NHibernate features Reduce hours of application development time and get better application architecture and performance Create, maintain, and update your database structure automatically with the help of NHibernate Written and tested for NHibernate 3.0 with input from the development team distilled in to easily accessible concepts and examples Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible Read more about this book (For more resources on NHibernate, see here.) Using NHibernate Profiler NHibernate Profiler from Hibernating Rhinos is the number one tool for analyzing and visualizing what is happening inside your NHibernate application, and for discovering issues you may have. In this recipe, I'll show you how to get up and running with NHibernate Profiler. Getting ready Download NHibernate Profiler from http://nhprof.com, and unzip it. As it is a commercial product, you will also need a license file. You may request a 30-day trial license from the NHProf website. Using our Eg.Core model, set up a new NHibernate console application with log4net. (Download code). How to do it... Add a reference to HibernatingRhinos.Profiler.Appender.dll from the NH Profiler download. In the session-factory element of App.config, set the property generate_statistics to true. Add the following code to your Main method: log4net.Config.XmlConfigurator.Configure();HibernatingRhinos.Profiler.Appender. NHibernate.NHibernateProfiler.Initialize();var nhConfig = new Configuration().Configure();var sessionFactory = nhConfig.BuildSessionFactory();using (var session = sessionFactory.OpenSession()){ var books = from b in session.Query<Book>() where b.Author == "Jason Dentler" select b; foreach (var book in books) Console.WriteLine(book.Name);} Run NHProf.exe from the NH Profiler download, and activate the license. Build and run your console application. Check the NH Profiler. It should look like the next screenshot. Notice the gray dots indicating alerts next to the Session #1 and Recent Statements. Select Session #1 from the Sessions list at the top left pane. Select the statement from the top right pane. Notice the SQL statement in the following screenshot: Click on See the 1 row(s) resulting from this statement. Enter your database connection string in the field provided, and click on OK. Close the query results window. Switch to the Alerts tab, and notice the alert: Use of implicit transaction is discouraged. Click on the Read more link for more information and suggested solutions to this particular issue. Switch to the Stack Trace tab, as shown in the next screenshot: Double-click on the NHProfTest.NHProfTest.Program.Main stack frame to jump to that location inside Visual Studio. Using the following code, wrap the foreach loop in a transaction and commit the transaction: using (var tx = session.BeginTransaction()){ foreach (var book in books) Console.WriteLine(book.Name); tx.Commit();} In NH Profiler, right-click on Sessions on the top left pane, and select Clear All Sessions. Build and run your application. Check NH Profiler for alerts. How it works... NHibernate Profiler uses a custom log4net appender to capture data about NHibernate activities inside your application and transmit that data to the NH Profiler application. 
Setting generate_statistics allows NHibernate to capture many key data points. These statistics are displayed in the lower, left-hand side of the pane of NHibernate Profiler. We initialize NHibernate Profiler with a call to NHibernateProfiler.Initialize(). For best results, do this when your application begins, just after you have configured log4net. There's more... NHibernate Profiler also supports offline and remote profiling, as well as command-line options for use with build scripts and continuous integration systems. In addition to NHibernate warnings and errors, NH Profiler alerts us to 12 common misuses of NHibernate, which are as follows: Transaction disposed without explicit rollback or commit: If no action is taken, transactions will rollback when disposed. However, this often indicates a missing commit rather than a desire to rollback the transaction Using a single session on multiple threads is likely a bug: A Session should only be used by one thread at a time. Sharing a session across threads is usually a bug, not an explicit design choice with proper locking. Use of implicit transaction is discouraged: Nearly all session activity should happen inside an NHibernate transaction. Excessive number of rows: In nearly all cases, this indicates a poorly designed query or bug. Large number of individual writes: This indicates a failure to batch writes, either because adonet.batch_size is not set, or possibly because an Identity-type POID generator is used, which effectively disables batching. Select N+1: This alert indicates a particular type of anti-pattern where, typically, we load and enumerate a list of parent objects, lazy-loading their children as we move through the list. Instead, we should eagerly fetch those children before enumerating the list Superfluous updates, use inverse="true": NH Profiler detected an unnecessary update statement from a bi-directional one-to-many relationship. Use inverse="true" on the many side (list, bag, set, and others) of the relationship to avoid this. Too many cache calls per session: This alert is targeted particularly at applications using a distributed (remote) second-level cache. By design, NHibernate does not batch calls to the cache, which can easily lead to hundreds of slow remote calls. It can also indicate an over reliance on the second-level cache, whether remote or local. Too many database calls per session: This usually indicates a misuse of the database, such as querying inside a loop, a select N+1 bug, or an excessive number of writes. Too many joins: A query contains a large number of joins. When executed in a batch, multiple simple queries with only a few joins often perform better than a complex query with many joins. This alert can also indicate unexpected Cartesian products. Unbounded result set: NH Profiler detected a query without a row limit. When the application is moved to production, these queries may return huge result sets, leading to catastrophic performance issues. As insurance against these issues, set a reasonable maximum on the rows returned by each query Different parameter sizes result in inefficient query plan cache usage: NH Profiler detected two identical queries with different parameter sizes. Each of these queries will create a query plan. This problem grows exponentially with the size and number of parameters used. Setting prepare_sql to true allows NHibernate to generate queries with consistent parameter sizes. See also Configuring NHibernate with App.config Configuring log4net logging

Multithreading with Qt

Packt
16 Nov 2016
13 min read
Qt has its own cross-platform implementation of threading. In this article by Guillaume Lazar and Robin Penea, authors of the book Mastering Qt 5, we will study how to use Qt and the available tools provided by the Qt folks. (For more resources related to this topic, see here.) More specifically, we will cover the following: Understanding the QThread framework in depth The worker model and how you can offload a process from the main thread An overview of all the available threading technologies in Qt Discovering QThread Qt provides a sophisticated threading system. We assume that you already know threading basics and the associated issues (deadlocks, threads synchronization, resource sharing, and so on) and we will focus on how Qt implements it. The QThread is the central class for of the Qt threading system. A QThread instance manages one thread of execution within the program. You can subclass QThread to override the run() function, which will be executed in the QThread class. Here is how you can create and start a QThread: QThread thread; thread.start(); The start() function calling will automatically call the run() function of thread and emit the started() signal. Only at this point, the new thread of execution will be created. When run() is completed, thread will emit the finished() signal. This brings us to a fundamental aspect of QThread: it works seamlessly with the signal/slot mechanism. Qt is an event-driven framework, where a main event loop (or the GUI loop) processes events (user input, graphical, and so on) to refresh the UI. Each QThread comes with its own event loop that can process events outside the main loop. If not overridden, run() calls the QThread::exec() function, which starts the thread's event loop. You can also override QThread and call exec(), as follows: class Thread : public QThread { Q_OBJECT protected: void run() { Object* myObject = new Object(); connect(myObject, &Object::started, this, &Thread::doWork); exec(); } private slots: void doWork(); }; The started()signal will be processed by the Thread event loop only upon the exec() call. It will block and wait until QThread::exit() is called. A crucial thing to note is that a thread event loop delivers events for all QObject classes that are living in that thread. This includes all objects created in that thread or moved to that thread. This is referred to as the thread affinity of an object. Here's an example: class Thread : public QThread { Thread() : mObject(new QObject()) { } private : QObject* myObject; }; // Somewhere in MainWindow Thread thread; thread.start(); In this snippet, myObject is constructed in the Thread constructor, which is created in turn in MainWindow. At this point, thread is living in the GUI thread. Hence, myObject is also living in the GUI thread. An object created before a QCoreApplication object has no thread affinity. As a consequence, no event will be dispatched to it. It is great to be able to handle signals and slots in our own QThread, but how can we control signals across multiple threads? 
A classic example is a long running process that is executed in a separate thread that has to notify the UI to update some state: class Thread : public QThread { Q_OBJECT void run() { // long running operation emit result("I <3 threads"); } signals: void result(QString data); }; // Somewhere in MainWindow Thread* thread = new Thread(this); connect(thread, &Thread::result, this, &MainWindow::handleResult); connect(thread, &Thread::finished, thread, &QObject::deleteLater); thread->start(); Intuitively, we assume that the first connect function sends the signal across multiple threads (to have a result available in MainWindow::handleResult), whereas the second connect function should work on thread's event loop only. Fortunately, this is the case due to a default argument in the connect() function signature: the connection type. Let's see the complete signature: QObject::connect( const QObject *sender, const char *signal, const QObject *receiver, const char *method, Qt::ConnectionType type = Qt::AutoConnection) The type variable takes Qt::AutoConnection as a default value. Let's review the possible values of Qt::ConectionType enum as the official Qt documentation states: Qt::AutoConnection: If the receiver lives in the thread that emits the signal, Qt::DirectConnection is used. Otherwise, Qt::QueuedConnection is used. The connection type is determined when the signal is emitted. Qt::DirectConnection: This slot is invoked immediately when the signal is emitted. The slot is executed in the signaling thread. Qt::QueuedConnection: The slot is invoked when control returns to the event loop of the receiver's thread. The slot is executed in the receiver's thread. Qt::BlockingQueuedConnection: This is the same as Qt::QueuedConnection, except that the signaling thread blocks until the slot returns. This connection must not be used if the receiver lives in the signaling thread or else the application will deadlock. Qt::UniqueConnection: This is a flag that can be combined with any one of the preceding connection types, using a bitwise OR element. When Qt::UniqueConnection is set, QObject::connect() will fail if the connection already exists (that is, if the same signal is already connected to the same slot for the same pair of objects). When using Qt::AutoConnection, the final ConnectionType is resolved only when the signal is effectively emitted. If you look again at our example, the first connect(): connect(thread, &Thread::result, this, &MainWindow::handleResult); When the result() signal will be emitted, Qt will look at the handleResult() thread affinity, which is different from the thread affinity of the result() signal. The thread object is living in MainWindow (remember that it has been created in MainWindow), but the result() signal has been emitted in the run() function, which is running in a different thread of execution. As a result, a Qt::QueuedConnection function will be used. We will now take a look at the second connect(): connect(thread, &Thread::finished, thread, &QObject::deleteLater); Here, deleteLater() and finished() live in the same thread, therefore, a Qt::DirectConnection will be used. It is crucial that you understand that Qt does not care about the emitting object thread affinity, it looks only at the signal's "context of execution." 
Loaded with this knowledge, we can take another look at our first QThread example to have a complete understanding of this system: class Thread : public QThread { Q_OBJECT protected: void run() { Object* myObject = new Object(); connect(myObject, &Object::started, this, &Thread::doWork); exec(); } private slots: void doWork(); }; When Object::started() is emitted, a Qt::QueuedConnection function will be used. his is where your brain freezes. The Thread::doWork() function lives in another thread than Object::started(), which has been created in run(). If the Thread has been instantiated in the UI Thread, this is where doWork() would have belonged. This system is powerful but complex. To make things more simple, Qt favors the worker model. It splits the threading plumbing from the real processing. Here is an example: class Worker : public QObject { Q_OBJECT public slots: void doWork() { emit result("workers are the best"); } signals: void result(QString data); }; // Somewhere in MainWindow QThread* thread = new Thread(this); Worker* worker = new Worker(); worker->moveToThread(thread); connect(thread, &QThread::finished, worker, &QObject::deleteLater); connect(this, &MainWindow::startWork, worker, &Worker::doWork); connect(worker, &Worker::resultReady, this, handleResult); thread->start(); // later on, to stop the thread thread->quit(); thread->wait(); We start by creating a Worker class that has the following: A doWork()slot that will have the content of our old QThread::run() function A result()signal that will emit the resulting data Next, in MainWindow, we create a simple thread and an instance of Worker. The worker->moveToThread(thread) function is where the magic happens. It changes the affinity of the worker object. The worker now lives in the thread object. You can only push an object from your current thread to another thread. Conversely, you cannot pull an object that lives in another thread. You cannot change the thread affinity of an object if the object does not live in your thread. Once thread->start() is executed, we cannot call worker->moveToThread(this) unless we are doing it from this new thread. After that, we will use three connect() functions: We handle worker life cycle by reaping it when the thread is finished. This signal will use a Qt::DirectConnection function. We start the Worker::doWork() upon a possible UI event. This signal will use a Qt::QueuedConnection. We process the resulting data in the UI thread with handleResult(). This signal will use a Qt::QueuedConnection. To sum up, QThread can be either subclassed or used in conjunction with a worker class. Generally, the worker approach is favored because it separates more cleanly the threading affinity plumbing from the actual operation you want to execute in parallel. Flying over Qt multithreading technologies Built upon QThread, several threading technologies are available in Qt. First, to synchronize threads, the usual approach is to use a mutual exclusion (mutex) for a given resource. Qt provides it by the mean of the QMutex class. Its usage is straightforward: QMutex mutex; int number = 1; mutex.lock(); number *= 2; mutex.unlock(); From the mutex.lock() instruction, any other thread trying to lock the mutex object will wait until mutex.unlock() has been called. The locking/unlocking mechanism is error prone in complex code. You can easily forget to unlock a mutex in a specific exit condition, causing a deadlock. 
To simplify this situation, Qt provides a QMutexLocker that should be used where the QMutex needs to be locked: QMutex mutex; QMutexLocker locker(&mutex); int number = 1; number *= 2; if (overlyComplicatedCondition) { return; } else if (notSoSimple) { return; } The mutex is locked when the locker object is created, and it will be unlocked when locker is destroyed, for example, when it goes out of scope. This is the case for every condition we stated where the return statement appears. It makes the code simpler and more readable. If you need to create and destroy threads frequently, managing QThread instances by hand can become cumbersome. For this, you can use the QThreadPool class, which manages a pool of reusable QThreads. To execute code within threads managed by a QThreadPool, you will use a pattern very close to the worker we covered earlier. The main difference is that the processing class has to extend the QRunnable class. Here is how it looks: class Job : public QRunnable { void run() { // long running operation } } Job* job = new Job(); QThreadPool::globalInstance()->start(job); Just override the run() function and ask QThreadPool to execute your job in a separate thread. The QThreadPool::globalInstance() function is a static helper function that gives you access to an application global instance. You can create your own QThreadPool class if you need to have a finer control over the QThreadPool life cycle. Note that QThreadPool::start() takes the ownership of the job object and will automatically delete it when run() finishes. Watch out, this does not change the thread affinity like QObject::moveToThread() does with workers! A QRunnable class cannot be reused, it has to be a freshly baked instance. If you fire up several jobs, QThreadPool automatically allocates the ideal number of threads based on the core count of your CPU. The maximum number of threads that the QThreadPool class can start can be retrieved with QThreadPool::maxThreadCount(). If you need to manage threads by hand, but you want to base it on the number of cores of your CPU, you can use the handy static function, QThreadPool::idealThreadCount(). Another approach to multithreaded development is available with the Qt Concurrent framework. It is a higher level API that avoids the use of mutexes/locks/wait conditions and promotes the distribution of the processing among CPU cores. Qt Concurrent relies of the QFuture class to execute a function and expect a result later on: void longRunningFunction(); QFuture<void> future = QtConcurrent::run(longRunningFunction); The longRunningFunction() will be executed in a separated thread obtained from the default QThreadPool class. To pass parameters to a QFuture class and retrieve the result of the operation, use the following code: QImage processGrayscale(QImage& image); QImage lenna; QFuture<QImage> future = QtConcurrent::run(processGrayscale, lenna); QImage grayscaleLenna = future.result(); Here, we pass lenna as a parameter to the processGrayscale() function. Because we want a QImage as a result, we declare QFuture with the template type QImage. After that, future.result() blocks the current thread and waits for the operation to be completed to return the final QImage template type. 
To avoid blocking, QFutureWatcher comes to the rescue: QFutureWatcher<QImage> watcher; connect(&watcher, &QFutureWatcher::finished, this, &QObject::handleGrayscale); QImage processGrayscale(QImage& image); QImage lenna; QFuture<QImage> future = QtConcurrent::run(processImage, lenna); watcher.setFuture(future); We start by declaring a QFutureWatcher with the template argument matching the one used for QFuture. Then, simply connect the QFutureWatcher::finished signal to the slot you want to be called when the operation has been completed. The last step is to the tell the watcher to watch the future object with watcher.setFuture(future). This statement looks almost like it's coming from a science fiction movie. Qt Concurrent also provides a MapReduce and FilterReduce implementation. MapReduce is a programming model that basically does two things: Map or distribute the processing of datasets among multiple cores of the CPU Reduce or aggregate the results to provide it to the caller check styleThis technique has been first promoted by Google to be able to process huge datasets within a cluster of CPU. Here is an example of a simple Map operation: QList images = ...; QImage processGrayscale(QImage& image); QFuture<void> future = QtConcurrent::mapped( images, processGrayscale); Instead of QtConcurrent::run(), we use the mapped function that takes a list and the function to apply to each element in a different thread each time. The images list is modified in place, so there is no need to declare QFuture with a template type. The operation can be made a blocking operation using QtConcurrent::blockingMapped() instead of QtConcurrent::mapped(). Finally, a MapReduce operation looks like this: QList images = ...; QImage processGrayscale(QImage& image); void combineImage(QImage& finalImage, const QImage& inputImage); QFuture<void> future = QtConcurrent::mappedReduced( images, processGrayscale, combineImage); Here, we added a combineImage() that will be called for each result returned by the map function, processGrayscale(). It will merge the intermediate data, inputImage, into the finalImage. This function is called only once at a time per thread, so there is no need to use a mutex object to lock the result variable. The FilterReduce reduce follows exactly the same pattern, the filter function simply allows to filter the input list instead of transforming it. Summary In this article, we discovered how a QThread works and you learned how to efficiently use tools provided by Qt to create a powerful multi-threaded application. Resources for Article: Further resources on this subject: QT Style Sheets [article] GUI Components in Qt 5 [article] DOM and QTP [article]

ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more

Bhagyashree R
25 Sep 2019
2 min read
Earlier this week, the ReactOS team announced the release of ReactOS 0.4.12. This release comes with a bunch of kernel improvements, Intel e1000 NIC driver support, font improvements, and more. Key updates in ReactOS 0.4.12 Kernel updates The filesystem infrastructure of ReactOS 0.4.12 has received quite a few improvements to enable Microsoft filesystem drivers. The team has also worked on the common cache module that has deep ties to the memory manager. The team has also improved device power management, fixed support for PXE booting, and overhauled the write-protection functionality. Window snapping ReactOS 0.4.12 comes with support for window snapping. So, users will now be able to align windows to sides or maximize and minimize them by dragging in specific directions. This release also comes with the keyboard shortcuts that accompany this feature. Intel e1000 NIC driver ReactOS 0.4.12 has a new driver to support the Network Interface Card (NIC) out of the box. Now, end-users do not need to manually find and install a driver. This new driver will also be compatible with e1000 NICs. Improvements related to font In ReactOS 0.4.12, font rendering is made more robust and correct. This release fixes a series of problems that badly affected text rendering for buttons in a range of applications,  from iTunes to various .NET applications. User-mode DLLs In this release, the team has made a range of improvements to user-mode components. The common controls (comctl) library is used by most of the Windows applications to draw various generic user interface elements. The team has fixed an issue related to it “reading extremely dryly.” Other updates include drivers for MIDI instruments and animated rotation bar in the startup/shutdown dialog. Check out the official announcement to know more about ReactOS 0.4.12 in detail. ReactOS 0.4.11 is now out with kernel improvements, manifests support, and more! Btrfs now boots ReactOS, a free and open-source alternative for Windows NT ReactOS version 0.4.9 released with Self-hosting and FastFAT crash fixes Understanding network port numbers, TCP, UDP, and ICMP on an operating system Google’s secret Operating System ‘Fuchsia’ will run Android Applications: 9to5Google Report  

5 blog posts that could make you a better Python programmer

Sam Wood
11 Feb 2019
2 min read
Python is one of the most important languages to master. It’s top rated, fast growing, and in demand by businesses around the globe. There’s a host of excellent insight across the web about how to become a better programmer with Python. Here’s five blogs we think you need to read to upgrade your skills and knowledge. 1. A Brief History of Python Did you know Python is actually older than Java, R and JavaScript? If you want to be a better Python programmer, it pays to know your history. This quick blog post takes you through the language's journey from Christmas hobby project to its modern ascendancy with version 3. 2. Do you write Python Code or Pythonic Code? Are you writing code in Python, or code for Python? When people talk about Pythonic code they mean that the code uses Python idioms well, that is natural or displays fluency in the language. Are you writing code like you would write Java or C++? This 4-minute blog post gives quick tips on how to make your code Pythonic. 3. The Singleton Python Design Pattern in Depth The singleton pattern is a powerful design pattern that allows you to create only one instance of data. You’d generally use it for things like the logging class and its subclasses, managing a connection to a database, or use read-only singletons to store some global states. This in-depth blog post takes you through the three principle ways to implement singletons, for better Python code. 4. Why is Python so good for artificial intelligence and machine learning? 5 Experts Explain. Python is the breakout language of data, zooming ahead of rival R to be dominant in the field of artificial intelligence and machine learning. But what is it about the programming language that makes it so well suited for this fast-growing field? In this blog post, five artificial intelligence experts all weigh in on what they think makes Python perfect for AI and machine learning. 5. Top 7 Python Programming Books You Need To Read That’s right - we put a list in our list. But if you really want to become a better Python programmer, you’ll want to get to grips with this stack of amazing Python books. Whether you’re a complete beginner or more experienced, these seven Python titles are the perfect way to upgrade your knowledge.
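As a quick taste of the design-pattern post mentioned in item 3, here is one common way to write a singleton in Python, overriding __new__ so that every instantiation returns the same object. This is an illustrative sketch only, and not necessarily the exact approach the linked article walks through; the class name is invented for the example.

class DatabaseConnection:
    # Hypothetical resource that should exist only once per process.
    _instance = None

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = DatabaseConnection()
b = DatabaseConnection()
assert a is b  # both names refer to the single shared instance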

How to Scaffold a New module in Odoo 11

Sugandha Lahoti
25 May 2018
2 min read
The latest version of the Odoo ERP, Odoo 11, brings a plethora of features targeting business application development. The market for Odoo is growing enormously, and if you have thought about developing in Odoo, now is the best time to start. This hands-on video course, Odoo 11 Development Essentials, by Riste Kabranov, will help you get started with Odoo to build powerful applications.

What is Scaffolding?

With scaffolding, you can automatically create a skeleton structure to simplify the bootstrapping of new modules in Odoo. Since it's an automatic process, you don't need to spend effort setting up basic structures or looking up the starting requirements. Odoo has a scaffold command that creates the skeleton for a new module based on a template. By default, the new module is created in the current working directory, but we can provide a specific directory in which to create the module by passing it as an additional parameter.

A step-by-step guide to scaffolding a new module in Odoo 11:

Step 1
Navigate to /opt/odoo/odoo and create a folder named custom_addons.

Step 2
Scaffold a new module into custom_addons. For this:
Locate odoo-bin
Use ./odoo-bin scaffold module_name folder_name to scaffold a new empty module
Check that the new module is there and contains all the files needed

Check out the video for a more detailed walkthrough! This video tutorial has been taken from Odoo 11 Development Essentials. To learn how to build and customize business applications with Odoo, buy the full video course. ERP tool in focus: Odoo 11 Building Your First Odoo Application Top 5 free Business Intelligence tools
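Once the scaffolded skeleton from Step 2 is in place, you typically start filling in the generated models file. The following is a hypothetical sketch of what such a model might look like in Odoo 11; the module path, model name, and fields are invented for illustration and are not produced by the scaffold command itself.

# custom_addons/my_module/models/models.py (illustrative example only)
from odoo import models, fields

class LibraryBook(models.Model):
    _name = 'library.book'            # assumed technical model name
    _description = 'Library Book'

    name = fields.Char(string='Title', required=True)
    author = fields.Char(string='Author')
    date_published = fields.Date(string='Published on')

After editing the module, restart the Odoo server and upgrade the module so Odoo picks up the new model.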

Getting Started with RStudio

Packt
16 Feb 2016
5 min read
The number of users adopting the R programming language has been increasing faster and faster in the last few years. The functions of the R console are limited when it comes to managing a lot of files, or when we want to work with version control systems. This is the reason, in combination with the increasing adoption rate, why a need for a better development environment arose. To serve this need, a team of R fans began to develop an integrated development environment (IDE) to make it easier to work on bigger projects and to collaborate with others. This IDE has the name, RStudio. In this article, we will see how to work with RStudio and projects (For more resources related to this topic, see here.) Working with RStudio and projects In the times before RStudio, it was very hard to manage bigger projects with R in the R console, as you had to create all the folder structures on your own. When you work with projects or open a project, RStudio will instantly take several actions. For example, it will start a new and clean R session, it will source the .Rprofile file in the project's main directory, and it will set the current working directory to the project directory. So, you have a complete working environment individually for every project. RStudio will even adjust its own settings, such as active tabs, splitter positions, and so on, to where they were when the project was closed. But just because you can create projects with RStudio easily, it does not mean that you should create a project for every single time that you write R code. For example, if you just want to do a small analysis, we would recommend that you create a project where you save all your smaller scripts. Creating a project with RStudio RStudio offers you an easy way to create projects. Just navigate to File | New Project and you will see a popup window as with the options shown in the following screenshot: These options let you decide from where you want to create your project. So, if you want to start it from scratch and create a new directory, associate your new project to an existing one, or if you want to create a project from a version control repository, you can avail of the respective options. For now, we will focus on creating a new directory. The following screenshot shows you the next options available: Locating your project A very important question you have to ask yourself when creating a new project is where you want to save it? There are several options and details you have to pay attention to especially when it comes to collaboration and different people working on the same project. You can save your project locally, on a cloud storage or with the help of a revision control system such as Git. Creating your first project To begin your first project, choose the New Directory option we described before and create an empty project. Then, choose a name for the directory and the location that you want to save it in. You should create a projects folder on your Dropbox. The first project will be a small data analysis based on a dataset that was extracted from the 1974 issue of the Motor Trend US magazine. It comprises fuel consumption and ten aspects of automobile design and performance, such as the weight or number of cylinders for 32 automobiles, and is included in the base R package. So, we do not have to install a separate package to work with this dataset, as it is automatically loaded when you start R: As you can see, we left the Use packrat with this project option unchecked. 
Packrat is a dependency management tool that makes your R code more isolated, portable, and reproducible by giving your project its own privately managed package library. This is especially important when you want to create projects in an organizational context where the code has to run on various computer systems, and has to be usable for a lot of different users. This first project will just run locally and will not focus on a specific combination of package versions. Organizing your folders RStudio creates an empty directory for you that includes just the file, Motor-Car-Trend-Analysis.Rproj. This file will store all the information on your project that RStudio will need for loading. But to stay organized, we have to create some folders in the directory. Create the following folders: data: This includes all the data that we need for our analysis code: This includes all the code files for cleaning up data, generating plots, and so on plots: This includes all graphical outputs reports: This comprises all the reports that we create from our dataset Saving the data The Motor Trend Car Road Tests dataset is part of the dataset package, which is one of the preinstalled packages in R. But, we will save the data in a CSV file in our data folder, after extracting the data from the mtcars variable, to make sure our analysis is reproducible. Put the following line of code in a new R script and save it as data.R in the code folder: #write data into csv file write.csv(mtcars, file = "data/cars.csv", row.names=FALSE) Analyzing the data The analysis script will first have to load the data from the CSV file with the following line: cars_data <- read.csv(file = "data/cars.csv", header = TRUE, sep = ",") Summary To learn more about RStudio, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Mastering Machine Learning with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-machine-learning-r) R Data Analysis Cookbook (https://www.packtpub.com/big-data-and-business-intelligence/r-data-analysis-cookbook) Mastering Data Analysis with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-data-analysis-r) Resources for Article: Further resources on this subject: RefresheR [article] Deep learning in R [article] Aspects of Data Manipulation in R [article]

Python founder resigns - Guido van Rossum goes ‘on a permanent vacation from being BDFL’

Savia Lobo
13 Jul 2018
5 min read
Python is one of the most popular scripting languages widely adopted and loved due to its simplicity.  Since its humble beginnings in the last 80s as an interpreter for the new, a simple-to-read scripting language, it has now come to dominate all of the tech world. Python has become a vital part of web development stacks such as Perl, PHP, and others have been core to domains like security. It is also used in current popular technologies such as AI, ML, and DL. After 28 years of successfully stewarding the Python community since inventing it back in Dec 1989, Guido van Rossum has decided to take himself out of the decision making process of the community as a Benevolent dictator for life (BDFL). Guido still promises to be a part of the core development group. He also added that he will be available to mentor people but most of the times the community will have to manage on their own. Benevolent dictator for life (BDFL) is a term that Guido's fellow Python enthusiasts came up with for him, as a joke, when discussing minutes of the meeting over email regarding leading Python’s development and adoption. Who will look after the Python community now? Guido Van Rossum said, "I am not going to appoint a successor". True to his leadership style, he has thrown his team of core developers into the deep end by asking them to consider what the Python community's new governance model could be. In his memo, he asked, "So what are you all going to do? Create a democracy? Anarchy? A dictatorship? A federation?" Guido's parting advice to the core dev team Guido expressed confidence in his team to continue to manage the day-to-day tasks and operations just as they’ve been doing under his leadership. The two things he wants the core developers and the community to think deeply about are: How the PEPs are decided and How will the new core developers be inducted? He also emphasized the importance of fostering the right community culture militantly through Python's Community Code of Conduct (CoC). He said, "if you don't like that document your only option might be to leave this group voluntarily. Perhaps there are issues to decide like when should someone be kicked out (this could be banning people from python-dev or python-ideas too since those are also covered by the CoC)." He assured the team that while he has stepped down as the BDFL and from all decision-making duties, he will continue to be an active member of the community and will now be more available as a mentor to those on the core development team. Guido's decision to quit seems to have stemmed partly from the physical, mental, and emotional toll that the role has taken on him over years. He concluded his thread on Transfer of Power by saying, "I'm tired, and need a very long break". How is the Python community taking this decision? The development team hopes Guido will make a come back after his well-deserved break. As a BDFL, Guido has provided them with consistency in design and taste. By having Guido as a monitor, the team has had a very consistent view of how the community should behave and this has been an asset for the whole team. Now they have four ways to explore to govern the Python community Find a new BDFL. This option seems highly unlikely as Guido’s legacy is irreplaceable. Besides, it is practically the least robust to rely on one person to take all key decisions and to commit their full time to the community. That person also needs to be well respected and accepted as a de facto head. 
2. Set up an N-virate leadership team (a group of 3, a triumvirate, or 5, a quintumvirate, of experts). With such a model, the responsibilities and load would be distributed equally among the members chosen from the core development team. This appears to be the current favorite on the thread that opened yesterday.
3. Become a democracy. In this model, the community gets to vote on all key decisions. This looks like the short-term fix the team is gravitating towards, at least to decide on the immediate task at hand. But many on the team acknowledge that this is not a permanent answer, as it would pull the language in too many directions and is also time-consuming.
4. Explore the governance models of other open source communities. This option is also being seriously considered in the discussions.
Clearly, the community loves Guido, as is evident from the deluge of well wishes he is receiving from all over the globe. You know you have done your job well when you hear someone say 'You changed my life'. Guido has changed millions of lives for the better.
https://twitter.com/AndrewYNg/status/1017664116482162689
https://twitter.com/anthonypjshaw/status/1017610576640393216
https://twitter.com/generativist/status/1017547690228396032
https://twitter.com/bloodyquantum/status/1017558674024218624
Thank you, Guido, for Python, your heart, and your leadership. We know the community will thrive even in your absence because you've cultivated an excellent culture and a great set of minds.
Top 7 Python programming books you need to read
Python web development: Django vs Flask in 2018
Python experts talk Python on Twitter: Q&A Recap

Building Your First Odoo Application

Packt
02 Jan 2017
22 min read
In this article by Daniel Reis, the author of the book Odoo 10 Development Essentials, we will create our first Odoo application and learn the steps needed to make it available to Odoo and install it. (For more resources related to this topic, see here.)
Inspired by the notable http://todomvc.com/ project, we will build a simple To-Do application. It should allow us to add new tasks, mark them as completed, and finally clear the task list of all the already completed tasks.
Understanding applications and modules
It's common to hear about Odoo modules and applications. But what exactly is the difference between them? Add-on modules are the building blocks of Odoo applications. A module can add new features to Odoo, or modify existing ones. It is a directory containing a manifest, or descriptor file, named __manifest__.py, plus the remaining files that implement its features.
Applications are the way major features are added to Odoo. They provide the core elements for a functional area, such as Accounting or HR, on top of which additional add-on modules modify or extend features. Because of this, they are highlighted in the Odoo Apps menu. If your module is complex and adds new or major functionality to Odoo, you might consider creating it as an application. If your module just makes changes to existing functionality in Odoo, it is likely not an application. Whether a module is an application or not is defined in the manifest. Technically, it does not have any particular effect on how the add-on module behaves; it is only used for highlighting in the Apps list.
Creating the module basic skeleton
We should have the Odoo server at ~/odoo-dev/odoo/. To keep things tidy, we will create a new directory alongside it to host our custom modules, at ~/odoo-dev/custom-addons.
Odoo includes a scaffold command to automatically create a new module directory, with a basic structure already in place. You can learn more about it with:
$ ~/odoo-dev/odoo/odoo-bin scaffold --help
You might want to keep this in mind when you start working on your next module, but we won't be using it right now, since we prefer to manually create all the structure for our module.
An Odoo add-on module is a directory containing a __manifest__.py descriptor file. In previous versions, this descriptor file was named __openerp__.py. This name is still supported, but is deprecated. The module also needs to be Python-importable, so it must have an __init__.py file.
The module's directory name is its technical name. We will use todo_app for it. The technical name must be a valid Python identifier: it should begin with a letter and can only contain letters, numbers, and the underscore character. The following commands create the module directory and an empty __init__.py file in it, ~/odoo-dev/custom-addons/todo_app/__init__.py. In case you would like to do that directly from the command line, this is what you would use:
$ mkdir ~/odoo-dev/custom-addons/todo_app
$ touch ~/odoo-dev/custom-addons/todo_app/__init__.py
Next, we need to create the descriptor file. It should contain only a Python dictionary with about a dozen possible attributes; of these, only the name attribute is required. A longer description attribute and the author attribute also have some visibility and are advised.
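Before writing it, it may help to picture the directory layout we are working towards; the files shown below are the ones created over the course of this article, so treat this preview purely as orientation:
custom-addons/
└── todo_app/
    ├── __init__.py
    ├── __manifest__.py
    ├── todo_model.py
    ├── security/
    │   └── ir.model.access.csv
    └── views/
        ├── todo_menu.xml
        └── todo_view.xml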
We should now add a __manifest__.py file alongside the __init__.py file with the following content:
{
    'name': 'To-Do Application',
    'description': 'Manage your personal To-Do tasks.',
    'author': 'Daniel Reis',
    'depends': ['base'],
    'application': True,
}
The depends attribute can have a list of other modules that are required. Odoo will have them automatically installed when this module is installed. It's not a mandatory attribute, but it's advised to always have it. If no particular dependencies are needed, we should depend on the core base module. You should be careful to ensure all dependencies are explicitly set here; otherwise, the module may fail to install in a clean database (due to missing dependencies) or have loading errors if, by chance, the other required modules happen to be loaded afterwards. For our application, we don't need any specific dependencies, so we depend on the base module only.
To be concise, we chose to use very few descriptor keys, but in a real-world scenario, we recommend that you also use the following additional keys, since they are relevant for the Odoo apps store:
summary: This is displayed as a subtitle for the module.
version: By default, this is 1.0. It should follow semantic versioning rules (see http://semver.org/ for details).
license: By default, this is LGPL-3.
website: This is a URL to find more information about the module. This can help people find more documentation or the issue tracker to file bugs and suggestions.
category: This is the functional category of the module, which defaults to Uncategorized. The list of existing categories can be found in the security groups form (Settings | User | Groups), in the Application field drop-down list.
These other descriptor keys are also available:
installable: It is True by default, but can be set to False to disable a module.
auto_install: If this flag is set to True, the module will be automatically installed, provided all its dependencies are already installed. It is used for glue modules.
Since Odoo 8.0, instead of the description key, we can use a README.rst or README.md file in the module's top directory.
A word about licenses
Choosing a license for your work is very important, and you should consider carefully what the best choice for you is, and its implications. The most used licenses for Odoo modules are the GNU Lesser General Public License (LGPL) and the Affero General Public License (AGPL). The LGPL is more permissive and allows commercial derivative works, without the need to share the corresponding source code. The AGPL is a stronger open source license, and requires derivative works and service hosting to share their source code. Learn more about the GNU licenses at https://www.gnu.org/licenses/.
Adding to the add-ons path
Now that we have a minimalistic new module, we want to make it available to the Odoo instance. For that, we need to make sure the directory containing the module is in the add-ons path, and then update the Odoo module list. We will position ourselves in our work directory and start the server with the appropriate add-ons path configuration:
$ cd ~/odoo-dev
$ ./odoo/odoo-bin -d todo --addons-path="custom-addons,odoo/addons" --save
The --save option saves the options you used in a config file. This spares us from repeating them every time we restart the server: just run ./odoo-bin and the last saved options will be used.
Look closely at the server log. It should have an INFO ? odoo: addons paths:[...] line, and it should include our custom-addons directory.
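As a side note, the configuration file written by --save is a plain INI-style file (on Linux it is typically ~/.odoorc for Odoo 10, although the exact location may vary with your setup). A minimal sketch of what it might contain for our command above, with the yourname path segment being a placeholder, is:
[options]
addons_path = /home/yourname/odoo-dev/custom-addons,/home/yourname/odoo-dev/odoo/addons
db_name = todo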
Remember to also include any other add-ons directories you might be using. For instance, if you also have a ~/odoo-dev/extra directory containing additional modules to be used, you would include them with the option:
--addons-path="custom-addons,extra,odoo/addons"
Now we need the Odoo instance to acknowledge the new module we just added.
Installing the new module
In the Apps top menu, select the Update Apps List option. This will update the module list, adding any modules added since the last update. Remember that we need the developer mode enabled for this option to be visible. That is done in the Settings dashboard, via the link at the bottom right, below the Odoo version number information.
Make sure your web client session is working with the right database. You can check that at the top right: the database name is shown in parentheses, right after the user name. A way to enforce using the correct database is to start the server instance with the additional option --db-filter=^MYDB$.
The Apps option shows us the list of available modules. By default it shows only application modules. Since we created an application module, we don't need to remove that filter to see it. Type todo in the search and you should see our new module, ready to be installed. Now click on the module's Install button and we're ready!
The Model layer
Now that Odoo knows about our new module, let's start by adding a simple model to it. Models describe business objects, such as an opportunity, a sales order, or a partner (customer, supplier, and so on). A model has a list of attributes and can also define its specific business logic. Models are implemented using a Python class derived from an Odoo template class. They translate directly to database objects, and Odoo automatically takes care of this when installing or upgrading the module. The mechanism responsible for this is the object-relational mapping (ORM).
Our module will be a very simple application to keep to-do tasks. These tasks will have a single text field for the description and a checkbox to mark them as complete. We should later add a button to clean the to-do list of the old completed tasks.
Creating the data model
The Odoo development guidelines state that the Python files for models should be placed inside a models subdirectory. For simplicity, we won't be following this here, so let's create a todo_model.py file in the main directory of the todo_app module. Add the following content to it:
# -*- coding: utf-8 -*-
from odoo import models, fields

class TodoTask(models.Model):
    _name = 'todo.task'
    _description = 'To-do Task'
    name = fields.Char('Description', required=True)
    is_done = fields.Boolean('Done?')
    active = fields.Boolean('Active?', default=True)
The first line is a special marker telling the Python interpreter that this file is encoded in UTF-8, so that it can expect and handle non-ASCII characters. We won't be using any, but it's a good practice to have it anyway. The second line is a Python import statement, making available the models and fields objects from the Odoo core. The third line declares our new model. It's a class derived from models.Model. The next line sets the _name attribute, defining the identifier that will be used throughout Odoo to refer to this model. Note that the actual Python class name, TodoTask in this case, is meaningless to other Odoo modules. The _name value is what is used as the identifier. Notice that this and the following lines are indented.
If you're not familiar with Python, you should know that this is important: indentation defines a nested code block, so these four lines should all be equally indented.
Then we have the _description model attribute. It is not mandatory, but it provides a user-friendly name for the model's records, which can be used for better user messages.
The last three lines define the model's fields. It's worth noting that name and active are special field names. By default, Odoo will use the name field as the record's title when referencing it from other models. The active field is used to inactivate records, and by default, only active records will be shown. We will use it to clear away completed tasks without actually deleting them from the database.
Right now, this file is not yet used by the module. We must tell Python to load it with the module in the __init__.py file. Let's edit it to add the following line:
from . import todo_model
That's it. For our Python code changes to take effect, the server instance needs to be restarted (unless it was using the --dev mode). We won't see any menu option to access this new model, since we didn't add them yet. Still, we can inspect the newly created model using the Technical menu. In the Settings top menu, go to Technical | Database Structure | Models, search for the todo.task model on the list, and then click on it to see its definition.
If everything goes right, this confirms that the model and its fields were created. If you can't see them here, try a server restart with a module upgrade, as described before. We can also see some additional fields we didn't declare. These are reserved fields Odoo automatically adds to every new model. They are as follows:
id: A unique, numeric identifier for each record in the model.
create_date and create_uid: These specify when the record was created and who created it, respectively.
write_date and write_uid: These specify when the record was last modified and who modified it, respectively.
__last_update: This is a helper that is not actually stored in the database. It is used for concurrency checks.
The View layer
The View layer describes the user interface. Views are defined using XML, which is used by the web client framework to generate data-aware HTML views. We have menu items that activate actions, which in turn render views. For example, the Users menu item processes an action also called Users, which in turn renders a series of views. There are several view types available, such as the list and form views, and the filter options made available are defined by a particular type of view, the search view.
The Odoo development guidelines state that the XML files defining the user interface should be placed inside a views/ subdirectory. Let's start creating the user interface for our To-Do application.
Adding menu items
Now that we have a model to store our data, we should make it available on the user interface. For that, we should add a menu option to open the To-do Task model so that it can be used. Create the views/todo_menu.xml file to define a menu item and the action performed by it:
<?xml version="1.0"?>
<odoo>
    <!-- Action to open To-do Task list -->
    <act_window id="action_todo_task"
        name="To-do Task"
        res_model="todo.task"
        view_mode="tree,form" />
    <!-- Menu item to open To-do Task list -->
    <menuitem id="menu_todo_task"
        name="Todos"
        action="action_todo_task" />
</odoo>
The user interface, including menu options and actions, is stored in database tables.
The XML file is a data file used to load those definitions into the database when the module is installed or upgraded. The preceding code is an Odoo data file, describing two records to add to Odoo:
The <act_window> element defines a client-side window action that will open the todo.task model with the tree and form views enabled, in that order.
The <menuitem> element defines a top menu item calling the action_todo_task action, which was defined before.
Both elements include an id attribute. This id, also called an XML ID, is very important: it is used to uniquely identify each data element inside the module, and can be used by other elements to reference it. In this case, the <menuitem> element needs to reference the action to process, and needs to make use of the <act_window> id for that.
Our module does not know yet about the new XML data file. This is done by adding it to the data attribute in the __manifest__.py file, which holds the list of files to be loaded by the module. Add this attribute to the descriptor's dictionary:
'data': ['views/todo_menu.xml'],
Now we need to upgrade the module again for these changes to take effect. Go to the Todos top menu and you should see our new menu option available.
Even though we haven't defined our user interface view, clicking on the Todos menu will open an automatically generated form for our model, allowing us to add and edit records. Odoo is nice enough to automatically generate them so that we can start working with our model right away.
Odoo supports several types of views, but the three most important ones are tree (usually called list views), form, and search views. We'll add an example of each to our module.
Creating the form view
All views are stored in the database, in the ir.ui.view model. To add a view to a module, we declare a <record> element describing the view in an XML file, which is loaded into the database when the module is installed. Add this new views/todo_view.xml file to define our form view:
<?xml version="1.0"?>
<odoo>
    <record id="view_form_todo_task" model="ir.ui.view">
        <field name="name">To-do Task Form</field>
        <field name="model">todo.task</field>
        <field name="arch" type="xml">
            <form string="To-do Task">
                <group>
                    <field name="name"/>
                    <field name="is_done"/>
                    <field name="active" readonly="1"/>
                </group>
            </form>
        </field>
    </record>
</odoo>
Remember to add this new file to the data key in the manifest file; otherwise our module won't know about it and it won't be loaded.
This will add a record to the ir.ui.view model with the identifier view_form_todo_task. The view is for the todo.task model and is named To-do Task Form. The name is just for information; it does not have to be unique, but it should allow one to easily identify which record it refers to. In fact, the name can be omitted entirely; in that case, it will be automatically generated from the model name and the view type.
The most important attribute is arch, which contains the view definition, as seen in the XML code above. The <form> tag defines the view type, and in this case contains three fields. We also added an attribute to the active field to make it read-only.
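We mentioned earlier that we would also add a list (tree) and a search view. Their exact code is not part of this excerpt, but records following the same pattern, placed in the same views/todo_view.xml file, might look roughly like the sketch below; the XML IDs are our own choices:
<record id="view_tree_todo_task" model="ir.ui.view">
    <field name="name">To-do Task Tree</field>
    <field name="model">todo.task</field>
    <field name="arch" type="xml">
        <tree>
            <field name="name"/>
            <field name="is_done"/>
        </tree>
    </field>
</record>
<record id="view_filter_todo_task" model="ir.ui.view">
    <field name="name">To-do Task Filter</field>
    <field name="model">todo.task</field>
    <field name="arch" type="xml">
        <search>
            <field name="name"/>
            <filter name="filter_not_done" string="Not Done" domain="[('is_done', '=', False)]"/>
            <filter name="filter_done" string="Done" domain="[('is_done', '!=', False)]"/>
        </search>
    </field>
</record>
As with the form view, these are ordinary ir.ui.view records, so they can live in the todo_view.xml file already listed in the manifest.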
Adding action buttons
Forms can have buttons to perform actions. These buttons are able to trigger workflow actions, run window actions (such as opening another form), or run Python functions defined in the model. They can be placed anywhere inside a form, but for document-style forms, the recommended place for them is the <header> section. For our application, we will add two buttons to run the methods of the todo.task model:
<header>
    <button name="do_toggle_done" type="object" string="Toggle Done" class="oe_highlight" />
    <button name="do_clear_done" type="object" string="Clear All Done" />
</header>
The basic attributes of a button comprise the following:
The string attribute, with the text to be displayed on the button
The type attribute, referring to the type of action it performs
The name attribute, referring to the identifier for that action
The class attribute, an optional attribute to apply CSS styles, like in regular HTML
The complete form view
At this point, our todo.task form view should look like this:
<form>
    <header>
        <button name="do_toggle_done" type="object" string="Toggle Done" class="oe_highlight" />
        <button name="do_clear_done" type="object" string="Clear All Done" />
    </header>
    <sheet>
        <group name="group_top">
            <group name="group_left">
                <field name="name"/>
            </group>
            <group name="group_right">
                <field name="is_done"/>
                <field name="active" readonly="1" />
            </group>
        </group>
    </sheet>
</form>
Remember that for the changes to be loaded into our Odoo database, a module upgrade is needed. To see the changes in the web client, the form needs to be reloaded: either click again on the menu option that opens it or reload the browser page (F5 in most browsers). The action buttons won't work yet, since we still need to add their business logic.
The business logic layer
Now we will add some logic to our buttons. This is done with Python code, using methods in the model's Python class.
Adding business logic
We should edit the todo_model.py Python file to add to the class the methods called by the buttons. First we need to import the new API, so add it to the import statement at the top of the Python file:
from odoo import models, fields, api
The action of the Toggle Done button will be very simple: just toggle the Is Done? flag. For logic on records, use the @api.multi decorator. Here, self will represent a recordset, and we should then loop through each record. Inside the TodoTask class, add this:
@api.multi
def do_toggle_done(self):
    for task in self:
        task.is_done = not task.is_done
    return True
The code loops through all the to-do task records and, for each one, modifies the is_done field, inverting its value. The method does not need to return anything, but we should have it return at least a True value. The reason is that clients can use XML-RPC to call these methods, and this protocol does not support server functions returning just a None value.
For the Clear All Done button, we want to go a little further. It should look for all active records that are done and make them inactive. Usually, form buttons are expected to act only on the selected record, but in this case, we want it to also act on records other than the current one:
@api.model
def do_clear_done(self):
    dones = self.search([('is_done', '=', True)])
    dones.write({'active': False})
    return True
On methods decorated with @api.model, the self variable represents the model with no record in particular. We build a dones recordset containing all the tasks that are marked as done. Then, we set the active flag to False on them. The search method is an API method that returns the records that meet some conditions. These conditions are written in a domain, which is a list of triplets. The write method sets the values at once on all the elements of a recordset. The values to write are described using a dictionary.
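If you want to see search and write in action outside of a button method, you can experiment in the interactive Odoo shell. The short session below is only a sketch, assuming the todo database and add-ons path used earlier; the sample record values are our own:
$ ./odoo/odoo-bin shell -d todo --addons-path="custom-addons,odoo/addons"

# Inside the shell, env is a ready-to-use environment:
env['todo.task'].create({'name': 'Try the shell'})          # create a sample task
dones = env['todo.task'].search([('is_done', '=', True)])   # domain: a list of triplets
dones.write({'active': False})                               # bulk update in a single call
env.cr.commit()                                              # the shell does not commit automatically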
Using write this way is more efficient than iterating through the recordset to assign the value to each record one by one.
Set up access security
You might have noticed that, upon loading, our module is getting a warning message in the server log: The model todo.task has no access rules, consider adding one.
The message is pretty clear: our new model has no access rules, so it can't be used by anyone other than the admin super user. As a super user, the admin ignores data access rules, and that's why we were able to use the form without errors. But we must fix this before other users can use our model. Another issue yet to be addressed is that we want the to-do tasks to be private to each user. Odoo supports row-level access rules, which we will use to implement that.
Adding access control security
To get a picture of what information is needed to add access rules to a model, use the web client and go to Settings | Technical | Security | Access Controls List. Here we can see the ACL for some models. It indicates, per security group, what actions are allowed on records. This information has to be provided by the module, using a data file to load the lines into the ir.model.access model. We will give full access to the Employee group on the model. Employee is the basic access group nearly everyone belongs to. This is done using a CSV file named security/ir.model.access.csv. Let's add it with the following content:
id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink
access_todo_task_group_user,todo.task.user,model_todo_task,base.group_user,1,1,1,1
The filename corresponds to the model to load the data into, and the first line of the file has the column names. These are the columns provided by the CSV file:
id: The record external identifier (also known as an XML ID). It should be unique in our module.
name: A description title. It is only informative, and it's best if it's kept unique. Official modules usually use a dot-separated string with the model name and the group. Following this convention, we used todo.task.user.
model_id: The external identifier for the model we are giving access to. Models have XML IDs automatically generated by the ORM: for todo.task, the identifier is model_todo_task.
group_id: This identifies the security group to give permissions to. The most important ones are provided by the base module. The Employee group is such a case and has the identifier base.group_user.
The last four perm fields flag whether to grant read, write, create, or unlink (delete) access.
We must not forget to add the reference to this new file in the __manifest__.py descriptor's data attribute. It should look like this:
'data': [
    'security/ir.model.access.csv',
    'views/todo_view.xml',
    'views/todo_menu.xml',
],
As before, upgrade the module for these additions to take effect. The warning message should be gone, and we can confirm that the permissions are OK by logging in with the user demo (the password is also demo). If we run our tests now, they should fail only the test_record_rule test case.
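That remaining test failure points at the row-level access rule mentioned above, which makes each user's tasks private. The exact code for it is not part of this excerpt, but a record rule along the following lines, loaded from an additional data file (the file name security/todo_access_rules.xml and the XML ID are our own choices, and the file would also need to be added to the manifest's data list), is the usual way to implement it:
<?xml version="1.0"?>
<odoo>
    <!-- Row-level rule: regular users only see the tasks they created -->
    <record id="todo_task_user_rule" model="ir.rule">
        <field name="name">ToDo Tasks only for owner</field>
        <field name="model_id" ref="model_todo_task"/>
        <field name="domain_force">[('create_uid', '=', user.id)]</field>
        <field name="groups" eval="[(4, ref('base.group_user'))]"/>
    </record>
</odoo>
The domain_force expression restricts users in the Employee group to the records they created themselves, while the admin super user keeps seeing everything.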
Summary
We created a new module from the start, covering the most frequently used elements in a module: models, the three basic types of views (form, list, and search), business logic in model methods, and access security.
Always remember: when adding model fields, an upgrade is needed; when changing Python code, including the manifest file, a restart is needed; when changing XML or CSV files, an upgrade is needed. When in doubt, do both: restart the server and upgrade the modules.
Resources for Article:
Further resources on this subject:
Getting Started with Odoo Development [Article]
Introduction to Odoo [Article]
Web Server Development [Article]