
How-To Tutorials


9 Useful R Packages for NLP & Text Mining

Amey Varangaonkar
18 Dec 2017
6 min read
Note: The following excerpt is taken from the book Mastering Text Mining with R, co-authored by Ashish Kumar and Avinash Paul. The book covers various techniques to extract useful, high-quality information from your textual data.

There is a wide range of packages available in R for natural language processing and text mining. In the article below, we present some of the most popular and widely used R packages for NLP.

OpenNLP

OpenNLP is an R package which provides an interface to Apache OpenNLP, a machine-learning-based toolkit written in Java for natural language processing tasks. Apache OpenNLP is widely used for the most common tasks in NLP, such as tokenization, POS tagging, named entity recognition (NER), chunking, and parsing. It provides wrappers for maximum entropy (Maxent) models using the Maxent Java package, and it provides functions for sentence annotation, word annotation, POS tag annotation, and annotation parsing using the Apache OpenNLP chunking parser. The Maxent chunk annotator function computes the chunk annotation using the Maxent chunker provided by OpenNLP, and the Maxent entity annotator function uses the Apache OpenNLP Maxent name finder for entity annotation. Model files can be downloaded from http://opennlp.sourceforge.net/models-1.5/. These language models can be used in R by installing the OpenNLPmodels.language package from the repository at http://datacube.wu.ac.at. Get the OpenNLP package here.

RWeka

The RWeka package in R provides an interface to Weka, open source software developed by the machine learning group at the University of Waikato. Weka offers a wide range of machine learning algorithms that can either be applied directly to a dataset or called from Java code. Different data-mining activities, such as data preprocessing, supervised and unsupervised learning, and association mining, can be performed using the RWeka package. For natural language processing, RWeka provides tokenization and stemming functions: it exposes the AlphabeticTokenizer, NGramTokenizer, and WordTokenizer functions, which tokenize contiguous alphabetic sequences, split strings into n-grams, and perform simple word tokenization, respectively.
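As a minimal sketch of these tokenizers (assuming the RWeka package and a Java runtime are installed):

library(RWeka)

text <- "Text mining in R is straightforward."

WordTokenizer(text)                                   # simple word tokenization
AlphabeticTokenizer(text)                             # contiguous alphabetic sequences
NGramTokenizer(text, Weka_control(min = 2, max = 2))  # split into bigrams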
Get started with RWeka here.

RcmdrPlugin.temis

The RcmdrPlugin.temis package in R provides a graphical, integrated text-mining solution. It can be leveraged for many text-mining tasks, such as importing and cleaning a corpus, term and document counts, term co-occurrences, and correspondence analysis. Corpora can be imported from different sources and analysed using the importCorpusDlg function. The package provides flexible data source options to import corpora from text files, spreadsheet files, XML files, HTML files, the Alceste format, and Twitter searches. The import function processes the corpus and generates a term-document matrix. The package provides different functions to summarize and visualize corpus statistics, and correspondence analysis and hierarchical clustering can be performed on the corpus. The corpusDissimilarity function helps analyse and create a cross-dissimilarity table between the term-documents present in the corpus. The package also provides many functions to help users explore the corpus: for example, frequentTerms lists the most frequent terms of a corpus, specificTerms lists the terms most associated with each document, and subsetCorpusByTermsDlg creates a subset of the corpus. Term frequency, term co-occurrence, term dictionaries, the temporal evolution of occurrences (term time series), term metadata variables, and corpus temporal evolution are among the other useful text-mining features available in this package. Download the package from its CRAN page.

tm

The tm package is a text-mining framework which provides powerful functions to aid text-processing steps. It has methods for importing data, handling corpora, managing metadata, creating term-document matrices, and preprocessing. To manage documents with tm, we create a corpus, which is a collection of text documents. There are two types of implementation: the volatile corpus (VCorpus) and the permanent corpus (PCorpus). A VCorpus is held completely in memory, and when the R object is destroyed the corpus is gone; a PCorpus is stored on the filesystem and persists after the R object is destroyed. These corpora are created using the VCorpus and PCorpus functions, respectively. The package provides a few predefined sources which can be used to import text, such as DirSource, VectorSource, or DataframeSource; the getSources method lists the available sources, and users can create their own. The tm package ships with several reader options, such as readPlain, readPDF, and readDOC, and we can call the getReaders method for an up-to-date list of available readers. To write a corpus to the filesystem, we can use writeCorpus, and for inspecting a corpus there are methods such as inspect and print. For transformations such as stop-word removal, stemming, and whitespace removal, we can use tm_map with content_transformer, tolower, and stopwords("english"). For metadata management, meta comes in handy. The tm package also provides various quantitative functions for text analysis, such as DocumentTermMatrix, findFreqTerms, findAssocs, and removeSparseTerms.
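A minimal end-to-end sketch of this workflow (assuming the tm package is installed; the two in-line documents are placeholders):

library(tm)

docs <- VCorpus(VectorSource(c("Text Mining with R is fun.",
                               "The tm package manages text corpora.")))

docs <- tm_map(docs, content_transformer(tolower))        # lowercase all text
docs <- tm_map(docs, removeWords, stopwords("english"))   # drop English stop words
docs <- tm_map(docs, stripWhitespace)                     # collapse extra whitespace

dtm <- DocumentTermMatrix(docs)                           # documents x terms
findFreqTerms(dtm, lowfreq = 1)                           # terms occurring at least once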
Download the tm package here.

languageR

languageR provides data sets and functions for statistical analysis of text data. This package contains functions for vocabulary richness, vocabulary growth, and frequency spectra, as well as mixed-effects models. There are simulation functions available for simple regression, quasi-F factor, and Latin-square designs. Apart from that, the package can also be used for correlation and collinearity diagnostics, diagnostic visualization of logistic models, and more.

koRpus

The koRpus package is a versatile text-mining tool which implements many functions for text readability and lexical variation. Apart from that, it can also be used for basic tasks such as tokenization and POS tagging. You can find more information about its current version and dependencies here.

RKEA

The RKEA package provides an interface to KEA, a tool for keyword extraction from texts. RKEA requires a keyword extraction model, which can be created by manually indexing a small set of texts; using this model, it extracts keywords from documents.

maxent

The maxent package in R provides tools for a low-memory implementation of multinomial logistic regression, also known as the maximum entropy model. This package is quite helpful for classification tasks involving sparse term-document matrices, and for keeping memory consumption low on huge datasets. Download and get started with maxent.

lsa

Truncated singular value decomposition can help overcome the variability in a term-document matrix by deriving its latent features statistically. The lsa package in R provides an implementation of latent semantic analysis.
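A small hedged sketch of the idea (assuming the lsa package is installed; the tiny hand-built term-document matrix is illustrative only):

library(lsa)

# terms x documents matrix
tdm <- matrix(c(1, 0, 1,
                0, 1, 1,
                1, 1, 0),
              nrow = 3, byrow = TRUE,
              dimnames = list(c("cats", "dogs", "mice"), c("d1", "d2", "d3")))

space <- lsa(tdm, dims = dimcalc_share())  # truncated SVD into a latent space
round(as.textmatrix(space), 2)             # matrix re-expressed via latent features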
The ease of use and efficiency of R packages can be very handy when carrying out even the trickiest of text-mining tasks; as a result, they have become very popular in the community. If you found this post useful, you should definitely refer to the book Mastering Text Mining with R. It will give you ample techniques for effective text mining and analytics using the packages mentioned above.


OpenDaylight Fundamentals

Packt
05 Jul 2017
14 min read
In this article by Jamie Goodyear, Mathieu Lemay, Rashmi Pujar, Yrineu Rodrigues, Mohamed El-Serngawy, and Alexis de Talhouët, the authors of the book OpenDaylight Cookbook, we will be covering the following recipes:

Connecting OpenFlow switches
Mounting a NETCONF device
Browsing data models with Yang UI

(For more resources related to this topic, see here.)

OpenDaylight is a collaborative platform supported by leaders in the networking industry and hosted by the Linux Foundation. The goal of the platform is to enable the adoption of software-defined networking (SDN) and create a solid base for network functions virtualization (NFV).

Connecting OpenFlow switches

OpenFlow is a vendor-neutral standard communications interface defined to enable interaction between the control and forwarding channels of an SDN architecture. The OpenFlow plugin project intends to support implementations of the OpenFlow specification as it evolves. It currently supports OpenFlow versions 1.0 and 1.3.2. In addition to supporting the core OpenFlow specification, OpenDaylight Beryllium also includes preliminary support for the Table Type Patterns and OF-CONFIG specifications. The OpenFlow southbound plugin currently provides the following components: flow management, group management, meter management, and statistics polling. Let's connect an OpenFlow switch to OpenDaylight.

Getting ready

This recipe requires an OpenFlow switch. If you don't have one, you can use a mininet-vm with OvS installed. You can download mininet-vm from https://github.com/mininet/mininet/wiki/Mininet-VM-Images; any version should work. The following recipe will be presented using a mininet-vm with OvS 2.0.2.

How to do it...

1. Start the OpenDaylight distribution using the karaf script. Using this script will give you access to the karaf CLI:

$ ./bin/karaf

2. Install the user-facing feature responsible for pulling in all dependencies needed to connect an OpenFlow switch:

opendaylight-user@root>feature:install odl-openflowplugin-all

It might take a minute or so to complete the installation.

3. Connect an OpenFlow switch to OpenDaylight. We will use mininet-vm as our OpenFlow switch, as this VM runs an instance of Open vSwitch. Log in to mininet-vm using the username mininet and the password mininet, then create a bridge:

mininet@mininet-vm:~$ sudo ovs-vsctl add-br br0

Now let's connect OpenDaylight as the controller of br0:

mininet@mininet-vm:~$ sudo ovs-vsctl set-controller br0 tcp:${CONTROLLER_IP}:6633

Let's look at our topology:

mininet@mininet-vm:~$ sudo ovs-vsctl show
0b8ed0aa-67ac-4405-af13-70249a7e8a96
    Bridge "br0"
        Controller "tcp:${CONTROLLER_IP}:6633"
            is_connected: true
        Port "br0"
            Interface "br0"
                type: internal
    ovs_version: "2.0.2"

${CONTROLLER_IP} is the IP address of the host running OpenDaylight; we're establishing a TCP connection.

4. Have a look at the created OpenFlow node. Once the OpenFlow switch is connected, send the following request to get information regarding the switch:

Type: GET
Headers: Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/

This will list all the nodes under the opendaylight-inventory subtree of the MD-SAL, which stores OpenFlow switch information. As we connected our first switch, we should have only one node there. It will contain all the information the OpenFlow switch has, including its tables, its ports, flow statistics, and so on.
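As a hedged sketch, the same request can be issued with curl (admin/admin is the default OpenDaylight credential pair encoded in the Authorization header above):

curl -u admin:admin \
  http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/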
How it works...

Once the feature is installed, OpenDaylight listens for connections on ports 6633 and 6640. Setting up the controller on the OpenFlow-capable switch will immediately trigger a callback on OpenDaylight, which creates the communication pipeline between the switch and OpenDaylight so they can communicate in a scalable and non-blocking way.

Mounting a NETCONF device

The OpenDaylight component responsible for connecting remote NETCONF devices is called the NETCONF southbound plugin, aka the netconf-connector. Creating an instance of the netconf-connector will connect a NETCONF device. The NETCONF device will be seen as a mount point in the MD-SAL, exposing the device's configuration and operational datastores and its capabilities. These mount points allow applications and remote users (over RESTCONF) to interact with the mounted devices. The netconf-connector currently supports RFC 6241, RFC 5277, and RFC 6022. The following recipe will explain how to connect a NETCONF device to OpenDaylight.

Getting ready

This recipe requires a NETCONF device. If you don't have one, you can use the NETCONF test tool provided by OpenDaylight. It can be downloaded from the OpenDaylight Nexus repository: https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/netconf/netconf-testtool/1.0.4-Beryllium-SR4/netconf-testtool-1.0.4-Beryllium-SR4-executable.jar

How to do it...

1. Start the OpenDaylight karaf distribution using the karaf script. Using this script will give you access to the karaf CLI:

$ ./bin/karaf

2. Install the user-facing feature responsible for pulling in all dependencies needed to connect a NETCONF device:

opendaylight-user@root>feature:install odl-netconf-topology odl-restconf

It might take a minute or so to complete the installation.

3. Start your NETCONF device. If you want to use the NETCONF test tool, simulate a NETCONF device using the following command:

$ java -jar netconf-testtool-1.0.4-Beryllium-SR4-executable.jar --device-count 1

This will simulate one device bound to port 17830.

4. Configure a new netconf-connector. Send the following request using RESTCONF:

Type: PUT
URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

By looking closer at the URL, you will notice that the last part is new-netconf-device. This must match the node-id that we will define in the payload.

Headers:
Accept: application/xml
Content-Type: application/xml
Authorization: Basic YWRtaW46YWRtaW4=

Payload:

<node>
  <node-id>new-netconf-device</node-id>
  <host>127.0.0.1</host>
  <port>17830</port>
  <username>admin</username>
  <password>admin</password>
  <tcp-only>false</tcp-only>
</node>

Let's have a closer look at this payload:

node-id: Defines the name of the netconf-connector.
host: Defines the IP address of the NETCONF device.
port: Defines the port for the NETCONF session.
username: Defines the username of the NETCONF session. This should be provided by the NETCONF device configuration.
password: Defines the password of the NETCONF session. As with the username, this should be provided by the NETCONF device configuration.
tcp-only: Defines whether the NETCONF session should use TCP or SSH. If set to true, it will use TCP.

This is the default configuration of the netconf-connector; it actually has more configurable elements, which are presented in a second part below. Once you have completed the request, send it. This will spawn a new netconf-connector that connects to the NETCONF device at the provided IP address and port using the provided credentials.
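As a hedged sketch, the same PUT can be issued with curl (assuming the XML document above is saved in a hypothetical file named payload.xml):

curl -u admin:admin -X PUT \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  -d @payload.xml \
  http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device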
5. Verify that the netconf-connector has been pushed correctly, and get information about the connected NETCONF device. First, you could look at the log to see if any error occurred. If no error has occurred, you will see:

2016-05-07 11:37:42,470 | INFO | sing-executor-11 | NetconfDevice | 253 - org.opendaylight.netconf.sal-netconf-connector - 1.3.0.Beryllium | RemoteDevice{new-netconf-device}: Netconf connector initialized successfully

Once the new netconf-connector is created, some useful metadata is written into the MD-SAL's operational datastore under the network-topology subtree. To retrieve this information, send the following request:

Type: GET
Headers: Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

We're using new-netconf-device as the node-id because this is the name we assigned to the netconf-connector in a previous step. This request will provide information about the connection status and the device capabilities. The device capabilities are all the YANG models the NETCONF device advertised in its hello-message, which were used to create the schema context.

More configuration for the netconf-connector

As mentioned previously, the netconf-connector contains various configuration elements. These fields are non-mandatory and have default values; if you do not wish to override any of them, you shouldn't provide them.

schema-cache-directory: The destination schema repository for YANG files downloaded from the NETCONF device. By default, those schemas are saved in the cache directory ($ODL_ROOT/cache/schema). This setting defines where to save downloaded schemas relative to the cache directory. For instance, if you assigned new-schema-cache, schemas related to this device would be located under $ODL_ROOT/cache/new-schema-cache/.
reconnect-on-changed-schema: If set to true, the connector will automatically disconnect/reconnect when schemas are changed in the remote device. The netconf-connector will subscribe to base NETCONF notifications and listen for the netconf-capability-change notification. Default value: false.
connection-timeout-millis: Timeout in milliseconds after which the connection must be established. Default value: 20000 milliseconds.
default-request-timeout-millis: Timeout for blocking operations within transactions. Once this timer is reached, the request is canceled if it has not yet finished. Default value: 60000 milliseconds.
max-connection-attempts: Maximum number of connection attempts. A non-positive or null value is interpreted as infinity. Default value: 0, which means it will retry forever.
between-attempts-timeout-millis: Initial timeout in milliseconds between connection attempts. This is multiplied by the sleep-factor for every new attempt. Default value: 2000 milliseconds.
sleep-factor: Back-off factor used to increase the delay between connection attempts. Default value: 1.5.
keepalive-delay: The netconf-connector sends keepalive RPCs while the session is idle to ensure session connectivity. This delay specifies the timeout between keepalive RPCs, in seconds. A value of 0 disables this mechanism. Default value: 120 seconds.
Using this configuration, your payload would look like this:

<node>
  <node-id>new-netconf-device</node-id>
  <host>127.0.0.1</host>
  <port>17830</port>
  <username>admin</username>
  <password>admin</password>
  <tcp-only>false</tcp-only>
  <schema-cache-directory>new_netconf_device_cache</schema-cache-directory>
  <reconnect-on-changed-schema>false</reconnect-on-changed-schema>
  <connection-timeout-millis>20000</connection-timeout-millis>
  <default-request-timeout-millis>60000</default-request-timeout-millis>
  <max-connection-attempts>0</max-connection-attempts>
  <between-attempts-timeout-millis>2000</between-attempts-timeout-millis>
  <sleep-factor>1.5</sleep-factor>
  <keepalive-delay>120</keepalive-delay>
</node>

How it works...

Once the request to connect a new NETCONF device is sent, OpenDaylight will set up the communication channel used for managing and interacting with the device. At first, the remote NETCONF device sends its hello-message, defining all of its capabilities. Based on this, the netconf-connector downloads all the YANG files provided by the device; together, those YANG files define the schema context of the device. At the end of the process, some exposed capabilities might end up unavailable, for two possible reasons:

The NETCONF device provided a capability in its hello-message but hasn't provided the schema.
ODL failed to mount a given schema due to YANG violations. OpenDaylight parses YANG models as per RFC 6020; if a schema does not respect the RFC, it could end up as an unavailable-capability.

If you encounter one of these situations, looking at the logs will pinpoint the reason for the failure.

There's more...

Once the NETCONF device is connected, all its capabilities are available through the mount point. View it as a pass-through directly to the NETCONF device.

Get datastore

To see the data contained in the device datastore, use the following request:

Type: GET
Headers: Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/

Adding yang-ext:mount/ to the URL accesses the mount point created for new-netconf-device. This shows the configuration datastore; if you want to see the operational one, replace config with operational in the URL. If your device defines its own YANG model, you can access its data using the following request:

Type: GET
Headers: Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<container>

The <module> represents a schema defining the <container>. The <container> can be either a list or a container; it is not possible to access a single leaf. You can access containers/lists within containers/lists. The last part of the URL would look like this:

…/yang-ext:mount/<module>:<container>/<sub-container>

Invoke RPC

In order to invoke an RPC on the remote device, use the following request:

Type: POST
Headers: Accept: application/xml, Content-Type: application/xml, Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<operation>

This URL accesses the mount point of new-netconf-device, and through this mount point we're accessing <module> to call its <operation>.
The <module> represents a schema defining the RPC, and <operation> represents the RPC to call.

Delete a netconf-connector

Removing a netconf-connector will drop the NETCONF session, and all resources will be cleaned up. To perform this operation, use the following request:

Type: DELETE
Headers: Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

By looking closer at the URL, you can see that we are removing the NETCONF node-id new-netconf-device.

Browsing data models with Yang UI

Yang UI is a user interface application through which one can navigate among all the YANG models available in the OpenDaylight controller. Not only does it aggregate all data models, it also enables their usage: using this interface, you can create, remove, update, and delete any part of the model-driven datastore. It provides a nice, smooth user interface that makes it easier to browse through the models. This recipe will guide you through these functionalities.

Getting ready

This recipe only requires the OpenDaylight controller and a web browser.

How to do it...

1. Start your OpenDaylight distribution using the karaf script. Using this script will give you access to the karaf CLI:

$ ./bin/karaf

2. Install the user-facing feature responsible for pulling in all dependencies needed to use Yang UI:

opendaylight-user@root>feature:install odl-dlux-yangui

It might take a minute or so to complete the installation.

3. Navigate to http://localhost:8181/index.html#/yangui/index and log in with username admin and password admin. Once logged in, all modules will load until you see this message at the bottom of the screen: Loading completed successfully. You should see the API tab listing all YANG models in the following format: <module-name> rev.<revision-date>. For instance:

cluster-admin rev.2015-10-13
config rev.2013-04-05
credential-store rev.2015-02-26

By default, there isn't much you can do with the provided YANG models. So, let's connect an OpenFlow switch to better understand how to use Yang UI. Once done, refresh your web page to load the newly added modules.

4. Look for opendaylight-inventory rev.2013-08-19 and select the operational tab, as nothing will yet be in the config datastore. Then click on nodes, and you'll see a request bar at the bottom of the page with multiple options: you can copy the request to the clipboard to use it in your browser, send it, show a preview of it, or define a custom API request. For now, we will only send the request. You should see Request sent successfully, and under this message should be the retrieved data. As we only have one switch connected, there is only one node, and all the switch's operational information is now printed on your screen. You could do the same request by specifying the node-id in the request; to do that, expand nodes and click on node {id}, which enables a more fine-grained search.

How it works...

OpenDaylight has a model-driven architecture, which means that all of its components are modeled using YANG. While installing features, OpenDaylight loads YANG models, making them available within the MD-SAL datastore. Yang UI is a representation of this datastore: each schema represents a subtree based on the name of the module and its revision date. Yang UI aggregates and parses all those models. It also acts as a REST client; through its web interface we can execute functions such as GET, POST, PUT, and DELETE.
There's more...

The previous example can be improved on, as no user YANG model was loaded. For instance, if you mount a NETCONF device containing its own YANG model, you can interact with it through Yang UI. You would use the config datastore to push/update some data, and you would see the operational datastore updated accordingly. In addition, accessing your data is much easier than having to define the exact URL.

See also

Using API doc as a REST API client.

Summary

Throughout this article, we covered recipes for connecting OpenFlow switches, mounting a NETCONF device, and browsing data models with Yang UI.

Further resources on this subject:

Introduction to SDN - Transformation from legacy to SDN [article]
The OpenFlow Controllers [article]


Cloud computing trends in 2019

Guest Contributor
07 Jan 2019
8 min read
Cloud computing is a rapidly growing technology that many organizations are adopting to enable their digital transformation. As per the latest Gartner report, the cloud tech services market is projected to grow 17.3% to $206 billion in 2019, up from $175.8 billion in 2018, and by 2022, 90% of organizations will be using cloud services.

In today's world, cloud technology is a trending buzzword in business environments. It provides exciting new opportunities for businesses to compete on a global scale and is redefining the way we do business. It enables users to store and share data, such as applications and files, at remote locations. These benefits have been recognized by business owners of all sizes, from startups to well-established organizations, and many have already started using cloud computing.

How Cloud technology helps businesses

Reduced Cost

One of the most obvious advantages small businesses can get by shifting to the cloud is saving money. The cloud can provide small businesses with services at affordable and scalable prices. Virtualization expands the value of physical equipment, which means companies can achieve more with less. As a result, an organization can see a significant decline in power consumption, rack space, IT requirements, and more, leading to lower maintenance, installation, hardware, support, and upgrade costs. For small businesses in particular, those savings are essential.

Enhanced Flexibility

The cloud offers access to data and related files from any location and any device, at any time, over an internet connection. As working practices shift toward flexible and remote working, it is essential to give employees access to work-related data even when they are not at the workplace. Cloud computing not only helps employees work outside the office premises but also allows employers to manage their business as and when required. Enhanced flexibility and mobility can also lead to additional cost savings: for example, an employer can opt for BYOD (bring your own device), so employees can work on their own devices, which they are comfortable with.

Secured Data

Improved data security is another asset of cloud computing. With traditional data storage systems, data can be easily stolen or damaged, and there are greater chances of serious cyber attacks such as viruses, malware, and hacking; human errors and power outages can also affect data security. With cloud computing, data is protected in various ways, such as anti-virus measures and encryption. Additionally, to reduce the chance of data loss, cloud services help you remain in compliance with HIPAA, PCI, and other regulations.

Effective Collaboration

The cloud enables effective collaboration, helping small businesses track and oversee workflow and progress for effective results. There are many cloud collaboration tools available in the market, such as Google Drive, Salesforce, Basecamp, and Hive. These tools allow users to create, edit, save, and share documents for workplace collaboration, and a user can also restrict access to these materials.

Greater Integration

Cloud-based business solutions can create various simplified integration opportunities with numerous cloud-based providers. Businesses can also benefit from specialized services that integrate with back-office operations such as HR, accounting, and marketing.
This type of integration lets business owners concentrate on the core areas of their business.

Scalability

One of the great aspects of cloud-based services is their scalability. Currently, a small business may require limited storage and mobility, but in future its needs and requirements will increase significantly in parallel with the growth of the business. Considering that growth does not always occur linearly, cloud-based solutions can accommodate sudden spikes in an organization's requirements: cloud-based services have the flexibility to scale up or down, ensuring that all your requirements are served according to your budget plans.

Cloud Computing Trends in 2019

Hybrid & Multi-Cloud Solutions

Hybrid cloud will become the dominant business model in the future. For organizations, the public cloud cannot be a good fit for every type of solution, and shifting everything to the cloud can be a difficult task given specific requirements. The hybrid cloud model offers a transition solution that blends current on-premises infrastructure with public and private cloud services, so organizations can shift to cloud technology at their own pace while remaining effective and flexible.

Multi-cloud is the next step in the cloud evolution. It enables users to control and run applications, workloads, or data on any cloud (private, public, or hybrid) based on their technical requirements. Thus, a company can have multiple public and private clouds, or multiple hybrid clouds, all either connected together or not. We can expect multi-cloud strategies to dominate in the coming days.

Backup and Disaster Recovery

According to a Spiceworks report, 15% of cloud budgets are allocated to backup and disaster recovery (DR) solutions, which is the highest budget allocation, followed by email hosting and productivity tools. This large share reflects the shared responsibility model that public cloud providers operate on: providers such as AWS (Amazon Web Services), Microsoft Azure, and Google Cloud are responsible for the availability of backup and DR solutions and the security of the infrastructure, while users are in charge of their data protection and compliance.

Serverless Computing

Serverless computing is gaining popularity and will continue to do so in 2019. With this model, a cloud user requests a container PaaS (Platform as a Service), and the cloud provider charges for the PaaS as required. The customer does not need to buy or rent servers in advance and doesn't need to configure them; the cloud is responsible for providing the platform, its configuration, and a wide range of helpful tools for designing applications and working with data.

Data Containers

Container usage will become easier in 2019. Containers are popular for transferring data; they store and organize virtual objects and resolve the issue of having software run reliably while moving from one system to another. However, there are some limitations: while containers are used to transport, they can only be used with servers having compatible operating system kernels.

Artificial Intelligence Platforms

The use of AI to process big data is one of the more important upgrades in collecting business intelligence data and gaining a better understanding of how a business functions. An AI platform supports a faster, more effective, and more efficient way for data scientists and other team members to work together.
AI can help reduce costs in a variety of ways, such as automating simple tasks, preventing duplication of effort, and taking over some expensive labor tasks, such as copying or extracting data.

Edge computing

Edge computing is a systematic approach that executes data processing at the edge of the network to streamline cloud computing. It is a result of the ever-increasing use of IoT devices. Edge is essential for running real-time services, as it streamlines the flow of traffic from IoT devices and provides real-time data analytics and analysis. Hence, it is also on the rise in 2019.

Service mesh

A service mesh is a dedicated system layer that enhances service-to-service communication across microservices applications. It is a new and emerging class of service management that tames inter-microservice communication complexity and provides observability and tracing in a seamless way. As containers become more prevalent for cloud-based application development, the requirement for service meshes is increasing significantly. Service meshes can help oversee traffic through service discovery, load balancing, routing, and observability; they attempt to diminish the complexity of containers and improve network functionality.

Cloud Security

With the rise of technology, security is obviously another serious consideration. With the introduction of the GDPR (General Data Protection Regulation), security concerns have risen much higher and are essential to attend to. Many businesses have shifted to cloud computing without any serious consideration of its security compliance protocols. GDPR will therefore be an important focus in 2019, and organizations must ensure that their data practices are both safe and compliant.

Conclusion

As discussed above, cloud technology is capable of providing better data storage, data security, and collaboration, and it also changes workflows to help small business owners make better decisions. Finally, cloud connectivity is all about convenience and streamlining workflows to help any business become more flexible, efficient, productive, and successful. If you want to set your business up for success, this might be the time to transition to cloud-based services.

Author Bio

Amarendra Babu L loves pursuing excellence through writing and has a passion for technology. He is presently working as a content contributor for Mindmajix.com and Tekslate.com. He is a tech geek and loves to explore new opportunities. His work has been published on various sites related to big data, business analytics and intelligence, blockchain, cloud computing, data science, AI and ML, project management, and more. You can reach him at amarendrabl18@gmail.com. He is also available on LinkedIn.


Harrison Ferrone explains why C# is the preferred programming language for building games in Unity

Sugandha Lahoti
16 Dec 2019
6 min read
C# is one of the most popular programming languages used to create games with the Unity game engine. Experiences (games, AR/VR apps, and more) built with Unity have reached nearly 3 billion devices worldwide and were installed 24 billion times in the last 12 months. We spoke to Harrison Ferrone, software engineer, game developer, creative technologist, and author of the book Learning C# by Developing Games with Unity 2019. We talked about why C# is used for game design, the recent Unity 2019.2 release, and some tips and tricks for those developing games with Unity.

On C# and game development

Why is C# widely used to create games? How does it compare to C++? How is C# being used in other areas such as mobile and web development?

I think Unity chose to move forward with C# instead of JavaScript or Boo because of its learning curve and its history with Microsoft. [Boo was one of the three scripting languages for the Unity game engine until it was dropped in 2014.] In my experience, C# is easier to learn than languages like C++, and that accessibility is a huge draw for game designers and programmers in general. With Xamarin mobile development and ASP.NET web applications in the mix, there's really no stopping the C# language any time soon.

What are C# scripts? How are they useful for creating games with Unity?

C# scripts are the code files that store behaviors in Unity, powering everything the engine does. While there are a lot of new tools that will allow a developer to make a game without them, scripts are still the best way to create custom actions and interactions within a game space.

Editor's tip: To get started with creating a C# script in Unity, you can go through Chapter 1 of Harrison Ferrone's book Learning C# by Developing Games with Unity 2019.

On why Harrison wrote his book, Learning C# by Developing Games with Unity 2019

Tell us the motivation behind writing your book Learning C# by Developing Games with Unity 2019. Why is developing Unity games a good way to learn the C# programming language? Why do you prefer Unity over other game engines?

My main motivation for writing the book was two-fold. First, I always wanted to be a writer, so marrying my love for technology with a lifelong dream was a no-brainer. Second, I wanted to write a beginner's book that would stay true to a beginner audience, always keeping them in mind. In terms of choosing games as a medium for learning, I've found that making something interesting and novel while learning a new skill set leads to greater absorption of the material and more overall enjoyment. Unity has always been my go-to engine because its interface is highly intuitive and easy to get started with.

You have three years of experience building iOS applications in Swift, and a number of articles and tutorials on the Ray Wenderlich website. Recently, you started branching out into C++ and Unreal Engine 4. How did you get into game design and Unity development? What made you interested in building games?

I actually got into game design and Unity development first, before all the iOS and Swift experience. It was my major in university, and even though I couldn't find a job in the game industry right after I graduated, I still held onto it as a passion.

On developing games

The latest release of Unity, 2019.2, has a number of interesting features, such as ProBuilder, Shader Graph and effects, 2D Animation, and the Burst Compiler. What are some of your favorite features in this release?
What are your expectations from Unity 2019.3?

I'm really excited about ProBuilder in this release, as it's a huge time-saver for someone as artistically challenged as I am. I think tools like this will level the playing field for independent developers who may not have access to environment or level builders.

What are some essential tips and tricks that a game developer must keep in mind when working in Unity? What are the dos and don'ts?

I'd say the biggest thing to keep in mind when working with Unity is the component architecture that it's built on. When you're writing your own scripts, think about how they can be separated into their individual functions and structure them like that, with purpose. There's nothing worse than having a huge, bloated C# script that does everything under the sun, attaching it to a single game object in your project, and then realizing it really needs to be separated into its component parts.

What are the biggest challenges today in the field of game development? What is your advice for those developing games using C#?

Reaching the right audience is always challenge number one in any industry, and game development is no different. This is especially true for indie game developers, as they always have to be mindful of who they are making their game for and purposefully design and program their games accordingly. As far as advice goes, I always say the same thing: learn design patterns and agile development methodologies; they will open up new avenues for professional programming and project management.

Rust has been touted as one of the successors of the C family of languages, and the present state of game development in Rust is quite encouraging. What are your thoughts on Rust for game dev? Do you think major game engines like Unity and Unreal will support Rust for game development in the future?

I don't have any experience with Rust, but major engines like Unity and Unreal are unlikely to adopt a new language because of the huge cost associated with a changeover of that magnitude. However, that also leaves the possibility open for another engine to be developed around Rust in the future that targets games, mobile, and/or web development.

About the author

Harrison Ferrone was born in Chicago, IL, and raised all over. Most days, you can find him creating instructional content for LinkedIn Learning and Pluralsight, or tech editing for the Ray Wenderlich website. After a few years as an iOS developer at small start-ups, and one Fortune 500 company, he fell into a teaching career and never looked back. Throughout all this, he's bought many books, acquired a few cats, worked abroad, and continually wondered why Neuromancer isn't on more course syllabi. You can follow him on LinkedIn and GitHub.


NHibernate 3.0: Testing Using NHibernate Profiler and SQLite

Packt
06 Oct 2010
6 min read
NHibernate 3.0 Cookbook:

Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications
Master the full range of NHibernate features
Reduce hours of application development time and get better application architecture and performance
Create, maintain, and update your database structure automatically with the help of NHibernate
Written and tested for NHibernate 3.0, with input from the development team distilled into easily accessible concepts and examples
Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Read more about this book. (For more resources on NHibernate, see here.)

Using NHibernate Profiler

NHibernate Profiler from Hibernating Rhinos is the number one tool for analyzing and visualizing what is happening inside your NHibernate application, and for discovering issues you may have. In this recipe, I'll show you how to get up and running with NHibernate Profiler.

Getting ready

Download NHibernate Profiler from http://nhprof.com and unzip it. As it is a commercial product, you will also need a license file; you may request a 30-day trial license from the NHProf website. Using our Eg.Core model, set up a new NHibernate console application with log4net. (Download code.)

How to do it...

1. Add a reference to HibernatingRhinos.Profiler.Appender.dll from the NH Profiler download.

2. In the session-factory element of App.config, set the property generate_statistics to true (see the sketch after these steps).

3. Add the following code to your Main method:

log4net.Config.XmlConfigurator.Configure();
HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize();
var nhConfig = new Configuration().Configure();
var sessionFactory = nhConfig.BuildSessionFactory();
using (var session = sessionFactory.OpenSession())
{
    var books = from b in session.Query<Book>()
                where b.Author == "Jason Dentler"
                select b;
    foreach (var book in books)
        Console.WriteLine(book.Name);
}

4. Run NHProf.exe from the NH Profiler download, and activate the license.

5. Build and run your console application.

6. Check the NH Profiler. Notice the gray dots indicating alerts next to Session #1 and Recent Statements.

7. Select Session #1 from the Sessions list in the top left pane.

8. Select the statement from the top right pane, and notice the SQL statement.

9. Click on See the 1 row(s) resulting from this statement. Enter your database connection string in the field provided, and click on OK. Close the query results window.

10. Switch to the Alerts tab, and notice the alert: Use of implicit transaction is discouraged. Click on the Read more link for more information and suggested solutions to this particular issue.

11. Switch to the Stack Trace tab, and double-click on the NHProfTest.NHProfTest.Program.Main stack frame to jump to that location inside Visual Studio.

12. Using the following code, wrap the foreach loop in a transaction and commit the transaction:

using (var tx = session.BeginTransaction())
{
    foreach (var book in books)
        Console.WriteLine(book.Name);
    tx.Commit();
}

13. In NH Profiler, right-click on Sessions in the top left pane, and select Clear All Sessions.

14. Build and run your application, and check NH Profiler for alerts.
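For step 2, a hedged sketch of the relevant App.config fragment (only the generate_statistics property is the point here; the surrounding elements follow the standard NHibernate configuration schema, with connection and mapping properties elided):

<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- ...connection and mapping properties elided... -->
    <property name="generate_statistics">true</property>
  </session-factory>
</hibernate-configuration>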
How it works...

NHibernate Profiler uses a custom log4net appender to capture data about NHibernate activities inside your application and transmit that data to the NH Profiler application. Setting generate_statistics allows NHibernate to capture many key data points, which are displayed in the lower left pane of NHibernate Profiler. We initialize NHibernate Profiler with a call to NHibernateProfiler.Initialize(); for best results, do this when your application begins, just after you have configured log4net.

There's more...

NHibernate Profiler also supports offline and remote profiling, as well as command-line options for use with build scripts and continuous integration systems. In addition to NHibernate warnings and errors, NH Profiler alerts us to 12 common misuses of NHibernate, which are as follows:

Transaction disposed without explicit rollback or commit: If no action is taken, transactions will roll back when disposed. However, this often indicates a missing commit rather than a desire to roll back the transaction.
Using a single session on multiple threads is likely a bug: A session should only be used by one thread at a time. Sharing a session across threads is usually a bug, not an explicit design choice with proper locking.
Use of implicit transaction is discouraged: Nearly all session activity should happen inside an NHibernate transaction.
Excessive number of rows: In nearly all cases, this indicates a poorly designed query or a bug.
Large number of individual writes: This indicates a failure to batch writes, either because adonet.batch_size is not set, or possibly because an identity-type POID generator is used, which effectively disables batching.
Select N+1: This alert indicates a particular type of anti-pattern where, typically, we load and enumerate a list of parent objects, lazy-loading their children as we move through the list. Instead, we should eagerly fetch those children before enumerating the list (see the sketch after this list).
Superfluous updates, use inverse="true": NH Profiler detected an unnecessary update statement from a bidirectional one-to-many relationship. Use inverse="true" on the many side (list, bag, set, and so on) of the relationship to avoid this.
Too many cache calls per session: This alert is targeted particularly at applications using a distributed (remote) second-level cache. By design, NHibernate does not batch calls to the cache, which can easily lead to hundreds of slow remote calls. It can also indicate an over-reliance on the second-level cache, whether remote or local.
Too many database calls per session: This usually indicates a misuse of the database, such as querying inside a loop, a select N+1 bug, or an excessive number of writes.
Too many joins: A query contains a large number of joins. When executed in a batch, multiple simple queries with only a few joins often perform better than a complex query with many joins. This alert can also indicate unexpected Cartesian products.
Unbounded result set: NH Profiler detected a query without a row limit. When the application is moved to production, these queries may return huge result sets, leading to catastrophic performance issues. As insurance against these issues, set a reasonable maximum on the rows returned by each query.
Different parameter sizes result in inefficient query plan cache usage: NH Profiler detected two identical queries with different parameter sizes. Each of these queries will create a query plan. This problem grows exponentially with the size and number of parameters used. Setting prepare_sql to true allows NHibernate to generate queries with consistent parameter sizes.
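To make the select N+1 alert concrete, here is a hedged sketch of the eager-fetching fix using NHibernate's LINQ provider (the Book-to-Publisher association is hypothetical):

// requires: using System.Linq; using NHibernate.Linq;
using (var tx = session.BeginTransaction())
{
    var books = session.Query<Book>()
        .Fetch(b => b.Publisher)   // join-fetch the children up front: one query, not N+1
        .ToList();
    foreach (var book in books)
        Console.WriteLine(book.Publisher.Name);
    tx.Commit();
}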
See also

Configuring NHibernate with App.config
Configuring log4net logging


Multithreading with Qt

Packt
16 Nov 2016
13 min read
Qt has its own cross-platform implementation of threading. In this article by Guillaume Lazar and Robin Penea, authors of the book Mastering Qt 5, we will study how to use threading in Qt and the available tools it provides. (For more resources related to this topic, see here.)

More specifically, we will cover the following:

Understanding the QThread framework in depth
The worker model and how you can offload a process from the main thread
An overview of all the available threading technologies in Qt

Discovering QThread

Qt provides a sophisticated threading system. We assume that you already know threading basics and the associated issues (deadlocks, thread synchronization, resource sharing, and so on), so we will focus on how Qt implements threading. QThread is the central class of the Qt threading system: a QThread instance manages one thread of execution within the program. You can subclass QThread to override the run() function, which will be executed in the new thread. Here is how you can create and start a QThread:

QThread thread;
thread.start();

Calling start() will automatically call the run() function of thread and emit the started() signal. Only at this point will the new thread of execution be created. When run() completes, thread will emit the finished() signal.

This brings us to a fundamental aspect of QThread: it works seamlessly with the signal/slot mechanism. Qt is an event-driven framework, where a main event loop (or the GUI loop) processes events (user input, graphics, and so on) to refresh the UI. Each QThread comes with its own event loop that can process events outside the main loop. If not overridden, run() calls the QThread::exec() function, which starts the thread's event loop. You can also override QThread and call exec() yourself, as follows:

class Thread : public QThread
{
    Q_OBJECT
protected:
    void run()
    {
        Object* myObject = new Object();
        connect(myObject, &Object::started, this, &Thread::doWork);
        exec();
    }

private slots:
    void doWork();
};

The started() signal will be processed by the Thread event loop only upon the exec() call; it will block and wait until QThread::exit() is called. A crucial thing to note is that a thread event loop delivers events for all QObject classes that are living in that thread. This includes all objects created in that thread or moved to that thread. This is referred to as the thread affinity of an object. Here's an example:

class Thread : public QThread
{
    Thread() : myObject(new QObject()) { }
private:
    QObject* myObject;
};

// Somewhere in MainWindow
Thread thread;
thread.start();

In this snippet, myObject is constructed in the Thread constructor, which is created in turn in MainWindow. At this point, thread is living in the GUI thread; hence, myObject is also living in the GUI thread. An object created before a QCoreApplication object has no thread affinity, and as a consequence, no event will be dispatched to it. It is great to be able to handle signals and slots in our own QThread, but how can we control signals across multiple threads?
A classic example is a long-running process that is executed in a separate thread and has to notify the UI to update some state:

class Thread : public QThread
{
    Q_OBJECT
    void run()
    {
        // long running operation
        emit result("I <3 threads");
    }

signals:
    void result(QString data);
};

// Somewhere in MainWindow
Thread* thread = new Thread(this);
connect(thread, &Thread::result, this, &MainWindow::handleResult);
connect(thread, &Thread::finished, thread, &QObject::deleteLater);
thread->start();

Intuitively, we assume that the first connect function sends the signal across multiple threads (to have a result available in MainWindow::handleResult), whereas the second connect function should work on thread's event loop only. Fortunately, this is the case, due to a default argument in the connect() function signature: the connection type. Let's see the complete signature:

QObject::connect(
    const QObject *sender, const char *signal,
    const QObject *receiver, const char *method,
    Qt::ConnectionType type = Qt::AutoConnection)

The type parameter takes Qt::AutoConnection as its default value. Let's review the possible values of the Qt::ConnectionType enum, as the official Qt documentation states:

Qt::AutoConnection: If the receiver lives in the thread that emits the signal, Qt::DirectConnection is used. Otherwise, Qt::QueuedConnection is used. The connection type is determined when the signal is emitted.
Qt::DirectConnection: The slot is invoked immediately when the signal is emitted. The slot is executed in the signaling thread.
Qt::QueuedConnection: The slot is invoked when control returns to the event loop of the receiver's thread. The slot is executed in the receiver's thread.
Qt::BlockingQueuedConnection: This is the same as Qt::QueuedConnection, except that the signaling thread blocks until the slot returns. This connection must not be used if the receiver lives in the signaling thread, or else the application will deadlock.
Qt::UniqueConnection: This is a flag that can be combined with any one of the preceding connection types, using a bitwise OR. When Qt::UniqueConnection is set, QObject::connect() will fail if the connection already exists (that is, if the same signal is already connected to the same slot for the same pair of objects).

When using Qt::AutoConnection, the final connection type is resolved only when the signal is effectively emitted. If you look again at our example, consider the first connect():

connect(thread, &Thread::result, this, &MainWindow::handleResult);

When the result() signal is emitted, Qt will look at the handleResult() thread affinity, which is different from the thread affinity of the result() signal. The thread object is living in MainWindow (remember that it has been created in MainWindow), but the result() signal has been emitted in the run() function, which is running in a different thread of execution. As a result, a Qt::QueuedConnection will be used. We will now take a look at the second connect():

connect(thread, &Thread::finished, thread, &QObject::deleteLater);

Here, deleteLater() and finished() live in the same thread; therefore, a Qt::DirectConnection will be used. It is crucial that you understand that Qt does not care about the emitting object's thread affinity; it looks only at the signal's "context of execution."
Loaded with this knowledge, we can take another look at our first QThread example to get a complete understanding of this system:

    class Thread : public QThread
    {
        Q_OBJECT
    protected:
        void run()
        {
            Object* myObject = new Object();
            connect(myObject, &Object::started,
                    this, &Thread::doWork);
            exec();
        }

    private slots:
        void doWork();
    };

When Object::started() is emitted, a Qt::QueuedConnection is used. This is where your brain freezes. The Thread::doWork() function lives in a different thread from Object::started(), which has been created in run(). If the Thread has been instantiated in the UI thread, that is where doWork() would have belonged.

This system is powerful but complex. To make things simpler, Qt favors the worker model, which splits the threading plumbing from the real processing. Here is an example:

    class Worker : public QObject
    {
        Q_OBJECT
    public slots:
        void doWork()
        {
            emit result("workers are the best");
        }

    signals:
        void result(QString data);
    };

    // Somewhere in MainWindow
    QThread* thread = new QThread(this);
    Worker* worker = new Worker();
    worker->moveToThread(thread);

    connect(thread, &QThread::finished,
            worker, &QObject::deleteLater);
    connect(this, &MainWindow::startWork,
            worker, &Worker::doWork);
    connect(worker, &Worker::result,
            this, &MainWindow::handleResult);

    thread->start();

    // later on, to stop the thread
    thread->quit();
    thread->wait();

We start by creating a Worker class that has the following:

A doWork() slot that will contain the content of our old QThread::run() function
A result() signal that will emit the resulting data

Next, in MainWindow, we create a plain QThread and an instance of Worker. The worker->moveToThread(thread) call is where the magic happens: it changes the affinity of the worker object. The worker now lives in the thread object.

You can only push an object from your current thread to another thread. Conversely, you cannot pull an object that lives in another thread: you cannot change the thread affinity of an object if the object does not live in your thread. Once thread->start() is executed, we cannot call worker->moveToThread(this) unless we are doing it from this new thread.

After that, we use three connect() functions:

We handle the worker's life cycle by reaping it when the thread is finished. This connection uses a Qt::DirectConnection.
We start Worker::doWork() upon a possible UI event. This connection uses a Qt::QueuedConnection.
We process the resulting data in the UI thread with handleResult(). This connection uses a Qt::QueuedConnection.

To sum up, QThread can be either subclassed or used in conjunction with a worker class. Generally, the worker approach is favored because it separates the thread affinity plumbing more cleanly from the actual operation you want to execute in parallel.

Flying over Qt multithreading technologies

Built upon QThread, several threading technologies are available in Qt. First, to synchronize threads, the usual approach is to use a mutual exclusion (mutex) for a given resource. Qt provides it by means of the QMutex class. Its usage is straightforward:

    QMutex mutex;
    int number = 1;

    mutex.lock();
    number *= 2;
    mutex.unlock();

From the mutex.lock() instruction, any other thread trying to lock the mutex object will wait until mutex.unlock() has been called. The locking/unlocking mechanism is error-prone in complex code: you can easily forget to unlock a mutex in a specific exit condition, causing a deadlock.
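To see how easily this can happen, here is a minimal sketch of the pitfall; sharedCounter and increment() are hypothetical names used only for illustration:

    #include <QMutex>

    QMutex mutex;
    int sharedCounter = 0; // hypothetical resource guarded by the mutex

    void increment(bool abortEarly)
    {
        mutex.lock();
        if (abortEarly) {
            // Bug: this early exit skips unlock(), so the next thread
            // that calls increment() will wait on lock() forever
            return;
        }
        ++sharedCounter;
        mutex.unlock();
    }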
To simplify this situation, Qt provides the QMutexLocker class, which should be used wherever a QMutex needs to be locked:

    QMutex mutex;
    QMutexLocker locker(&mutex);

    int number = 1;
    number *= 2;
    if (overlyComplicatedCondition) {
        return;
    } else if (notSoSimple) {
        return;
    }

The mutex is locked when the locker object is created, and it is unlocked when locker is destroyed, for example, when it goes out of scope. This is the case for every condition we stated where the return statement appears. It makes the code simpler and more readable.

If you need to create and destroy threads frequently, managing QThread instances by hand can become cumbersome. For this, you can use the QThreadPool class, which manages a pool of reusable QThreads. To execute code within threads managed by a QThreadPool, you use a pattern very close to the worker we covered earlier. The main difference is that the processing class has to extend QRunnable. Here is how it looks:

    class Job : public QRunnable
    {
        void run()
        {
            // long running operation
        }
    };

    Job* job = new Job();
    QThreadPool::globalInstance()->start(job);

Just override the run() function and ask QThreadPool to execute your job in a separate thread. The QThreadPool::globalInstance() function is a static helper that gives you access to an application-wide instance. You can create your own QThreadPool if you need finer control over the pool's life cycle.

Note that QThreadPool::start() takes ownership of the job object and will automatically delete it when run() finishes. Watch out: this does not change the thread affinity as QObject::moveToThread() does with workers! A QRunnable cannot be reused; it has to be a freshly baked instance.

If you fire up several jobs, QThreadPool automatically allocates the ideal number of threads based on the core count of your CPU. The maximum number of threads that the QThreadPool class can start can be retrieved with QThreadPool::maxThreadCount(). If you need to manage threads by hand, but want to base their number on the number of cores of your CPU, you can use the handy static function QThread::idealThreadCount().

Another approach to multithreaded development is available with the Qt Concurrent framework. It is a higher-level API that avoids the use of mutexes, locks, and wait conditions, and promotes the distribution of processing among CPU cores. Qt Concurrent relies on the QFuture class to execute a function and fetch its result later:

    void longRunningFunction();
    QFuture<void> future = QtConcurrent::run(longRunningFunction);

The longRunningFunction() will be executed in a separate thread obtained from the default QThreadPool. To pass parameters to the function and retrieve the result of the operation, use the following code:

    QImage processGrayscale(QImage& image);

    QImage lenna;
    QFuture<QImage> future = QtConcurrent::run(processGrayscale, lenna);
    QImage grayscaleLenna = future.result();

Here, we pass lenna as a parameter to the processGrayscale() function. Because we want a QImage as a result, we declare QFuture with the template type QImage. After that, future.result() blocks the current thread and waits for the operation to complete before returning the final QImage.
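Because QtConcurrent::run() draws its threads from the global QThreadPool, you can also tune that pool before scheduling work. The following is a minimal sketch, assuming you want to cap the pool at an arbitrary example value of two threads:

    #include <QThread>
    #include <QThreadPool>
    #include <QDebug>

    void tuneGlobalPool()
    {
        // QThread::idealThreadCount() derives a sensible thread count
        // from the number of CPU cores detected on the system
        qDebug() << "ideal thread count:" << QThread::idealThreadCount();

        // Cap the global pool; QtConcurrent::run() will now execute at
        // most two jobs in parallel (two is an arbitrary example value)
        QThreadPool::globalInstance()->setMaxThreadCount(2);
    }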
As we just saw, future.result() blocks the calling thread. To avoid blocking, QFutureWatcher comes to the rescue:

    QFutureWatcher<QImage> watcher;
    connect(&watcher, &QFutureWatcher<QImage>::finished,
            this, &MainWindow::handleGrayscale);

    QImage processGrayscale(QImage& image);
    QImage lenna;
    QFuture<QImage> future = QtConcurrent::run(processGrayscale, lenna);
    watcher.setFuture(future);

We start by declaring a QFutureWatcher with the template argument matching the one used for QFuture. Then, we simply connect the QFutureWatcher::finished signal to the slot we want to be called when the operation has completed; here we assume the code lives in MainWindow and that handleGrayscale() is the slot processing the result. The last step is to tell the watcher to watch the future object with watcher.setFuture(future). This statement almost sounds like it's coming from a science fiction movie.

Qt Concurrent also provides MapReduce and FilterReduce implementations. MapReduce is a programming model that basically does two things:

Map, or distribute, the processing of datasets among multiple cores of the CPU
Reduce, or aggregate, the results to provide them to the caller

This technique was first promoted by Google to process huge datasets within a cluster of CPUs. Here is an example of a simple Map operation:

    QList<QImage> images = ...;

    QImage processGrayscale(QImage& image);
    QFuture<void> future = QtConcurrent::map(images, processGrayscale);

Instead of QtConcurrent::run(), we use the map() function, which takes a list and the function to apply to each element, each time in a different thread. The images list is modified in place, so there is no need to declare QFuture with a template type. The operation can be made blocking by using QtConcurrent::blockingMap() instead of QtConcurrent::map(). Finally, a MapReduce operation looks like this:

    QList<QImage> images = ...;

    QImage processGrayscale(QImage& image);
    void combineImage(QImage& finalImage, const QImage& inputImage);

    QFuture<QImage> future = QtConcurrent::mappedReduced(
        images, processGrayscale, combineImage);

Here, we added a combineImage() function that will be called for each result returned by the map function, processGrayscale(). It merges the intermediate data, inputImage, into finalImage. This function is called by only one thread at a time, so there is no need for a mutex to lock the result variable. FilterReduce follows exactly the same pattern; the filter function simply filters the input list instead of transforming it.

Summary

In this article, we discovered how QThread works and you learned how to efficiently use the tools provided by Qt to create a powerful multithreaded application.

Resources for Article:

Further resources on this subject:

QT Style Sheets [article]
GUI Components in Qt 5 [article]
DOM and QTP [article]
Implementing routing with React Router and GraphQL [Tutorial]

Bhagyashree R
19 May 2019
15 min read
Routing is essential to most web applications. You cannot cover all of the features of your application in just one page. It would be overloaded, and your user would find it difficult to understand. Sharing links to pictures, profiles, or posts is also very important for a social network such as Graphbook. It is also crucial to split content into different pages, due to search engine optimization (SEO). This article is taken from the book Hands-on Full-Stack Web Development with GraphQL and React by Sebastian Grebe. This book will guide you in implementing applications by using React, Apollo, Node.js, and SQL. By the end of the book, you will be proficient in using GraphQL and React for your full-stack development requirements. To follow along with the examples implemented in this article, you can download the code from the book’s GitHub repository. In this article, we will learn how to do client-side routing in a React application. We will cover the installation of React Router, implement routes, create user profiles with GraphQL backend, and handle manual navigation. Installing React Router We will first start by installing and configuring React Router 4 by running npm: npm install --save react-router-dom From the package name, you might assume that this is not the main package for React. The reason for this is that React Router is a multi-package library. That comes in handy when using the same tool for multiple platforms. The core package is called react-router. There are two further packages. The first one is the react-router-dom package, which we installed in the preceding code, and the second one is the react-router-native package. If at some point, you plan to build a React Native app, you can use the same routing, instead of using the browser's DOM for a real mobile app. The first step that we will take introduces a simple router to get our current application working, including different paths for all of the screens. There is one thing that we have to prepare before continuing. For development, we are using the webpack development server. To get the routing working out of the box, we will add two parameters to the webpack.client.config.js file. The devServer field should look as follows: devServer: { port: 3000, open: true, historyApiFallback: true, }, The historyApiFallback field tells the devServer to serve the index.html file, not only for the root path, http://localhost:3000/ but also when it would typically receive a 404 error. That happens when the path does not match a file or folder that is normal when implementing routing. The output field at the top of the config file must have a publicPath property, as follows: output: { path: path.join(__dirname, buildDirectory), filename: 'bundle.js', publicPath: '/', }, The publicPath property tells webpack to prefix the bundle URL to an absolute path, instead of a relative path. When this property is not included, the browser cannot download the bundle when visiting the sub-directories of our application, as we are implementing client-side routing. Implementing your first route Before implementing the routing, we will clean up the App.js file. Create a Main.js file next to the App.js file in the client folder. 
Insert the following code: import React, { Component } from 'react'; import Feed from './Feed'; import Chats from './Chats'; import Bar from './components/bar'; import CurrentUserQuery from './components/queries/currentUser'; export default class Main extends Component { render() { return ( <CurrentUserQuery> <Bar changeLoginState={this.props.changeLoginState}/> <Feed /> <Chats /> </CurrentUserQuery> ); }} As you might have noticed, the preceding code is pretty much the same as the logged in condition inside the App.js file. The only change is that the changeLoginState function is taken from the properties, and is not directly a method of the component itself. That is because we split this part out of the App.js and put it into a separate file. This improves reusability for other components that we are going to implement. Now, open and replace the render method of the App component to reflect those changes, as follows: render() { return ( <div> <Helmet> <title>Graphbook - Feed</title> <meta name="description" content="Newsfeed of all your friends on Graphbook" /> </Helmet> <Router loggedIn={this.state.loggedIn} changeLoginState= {this.changeLoginState}/> </div> ) } If you compare the preceding method with the old one, you can see that we have inserted a Router component, instead of directly rendering either the posts feed or the login form. The original components of the App.js file are now in the previously created Main.js file. Here, we pass the loggedIn state variable and the changeLoginState function to the Router component. Remove the dependencies at the top, such as the Chats and Feed components, because we won't use them any more thanks to the new Main component. Add the following line to the dependencies of our App.js file: import Router from './router'; To get this code working, we have to implement our custom Router component first. Generally, it is easy to get the routing running with React Router, and you are not required to separate the routing functionality into a separate file, but, that makes it more readable. To do this, create a new router.js file in the client folder, next to the App.js file, with the following content: import React, { Component } from 'react'; import LoginRegisterForm from './components/loginregister'; import Main from './Main'; import { BrowserRouter as Router, Route, Redirect, Switch } from 'react-router-dom'; export default class Routing extends Component { render() { return ( <Router> <Switch> <Route path="/app" component={() => <Main changeLoginState= {this.props.changeLoginState}/>}/> </Switch> </Router> ) }} At the top, we import all of the dependencies. They include the new Main component and the react-router package. The problem with the preceding code is that we are only listening for one route, which is /app. If you are not logged in, there will be many errors that are not covered. The best thing to do would be to redirect the user to the root path, where they can log in. Advanced routing with React Router The primary goal of this article is to build a profile page, similar to Facebook, for your users. We need a separate page to show all of the content that a single user has entered or created. Parameters in routes We have prepared most of the work required to add a new user route. Open up the router.js file again. 
Add the new route, as follows: <PrivateRoute path="/user/:username" component={props => <User {...props} changeLoginState={this.props.changeLoginState}/>} loggedIn={this.props.loggedIn}/> Those are all of the changes that we need to accept parameterized paths in React Router. We read out the value inside of the new user page component. Before implementing it, we import the dependency at the top of router.js to get the preceding route working: import User from './User'; Create the preceding User.js file next to the Main.js file. Like the Main component, we are collecting all of the components that we render on this page. You should stay with this layout, as you can directly see which main parts each page consists of. The User.js file should look as follows: import React, { Component } from 'react'; import UserProfile from './components/user'; import Chats from './Chats'; import Bar from './components/bar'; import CurrentUserQuery from './components/queries/currentUser'; export default class User extends Component { render() { return ( <CurrentUserQuery> <Bar changeLoginState={this.props.changeLoginState}/> <UserProfile username={this.props.match.params.username}/> <Chats /> </CurrentUserQuery> ); }} We use the CurrentUserQuery component as a wrapper for the Bar component and the Chats component. If a user visits the profile of a friend, they see the common application bar at the top. They can access their chats on the right-hand side, like in Facebook. We removed the Feed component and replaced it with a new UserProfile component. Importantly, the UserProfile receives the username property. Its value is taken from the properties of the User component. These properties were passed over by React Router. If you have a parameter, such as a username, in the routing path, the value is stored in the match.params.username property of the child component. The match object generally contains all matching information of React Router. From this point on, you can implement any custom logic that you want with this value. We will now continue with implementing the profile page. Follow these steps to build the user's profile page: Create a new folder, called user, inside the components folder. Create a new file, called index.js, inside the user folder. Import the dependencies at the top of the file, as follows: import React, { Component } from 'react'; import PostsQuery from '../queries/postsFeed'; import FeedList from '../post/feedlist'; import UserHeader from './header'; import UserQuery from '../queries/userQuery'; The first three lines should look familiar. The last two imported files, however, do not exist at the moment, but we are going to change that shortly. The first new file is UserHeader, which takes care of rendering the avatar image, the name, and information about the user. Logically, we request the data that we will display in this header through a new Apollo query, called UserQuery. Insert the code for the UserProfile component that we are building at the moment beneath the dependencies, as follows: export default class UserProfile extends Component { render() { const query_variables = { page: 0, limit: 10, username: this.props.username }; return ( <div className="user"> <div className="inner"> <UserQuery variables={{username: this.props.username}}> <UserHeader/> </UserQuery> </div> <div className="container"> <PostsQuery variables={query_variables}> <FeedList/> </PostsQuery> </div> </div> ) } } The UserProfile class is not complex. We are running two Apollo queries simultaneously. 
Both have the variables property set. The PostsQuery receives the regular pagination fields, page and limit, but also the username, which initially came from React Router. This property is also handed over to the UserQuery, inside of a variables object. We should now edit and create the Apollo queries before programming the profile header component. Open the postsFeed.js file from the queries folder. To use the username as input to the GraphQL query, we first have to change the query string in the GET_POSTS variable. Change the first two lines to match the following code:

    query postsFeed($page: Int, $limit: Int, $username: String) {
      postsFeed(page: $page, limit: $limit, username: $username) {

Add a new line to the getVariables method, above the return statement:

    if(typeof variables.username !== typeof undefined) {
      query_variables.username = variables.username;
    }

If the custom query component receives a username property, it is included in the GraphQL request. It is used to filter posts by the specific user that we are viewing. Create a new userQuery.js file in the queries folder to create the missing query class. Import all of the dependencies and parse the new query schema with graphql-tag, as follows:

    import React, { Component } from 'react';
    import { Query } from 'react-apollo';
    import Loading from '../loading';
    import Error from '../error';
    import gql from 'graphql-tag';

    const GET_USER = gql`
      query user($username: String!) {
        user(username: $username) {
          id
          email
          username
          avatar
        }
      }`;

The preceding query is nearly the same as the currentUser query. We are going to implement the corresponding user query later, in our GraphQL API. The component itself is as simple as the ones that we created before. Insert the following code:

    export default class UserQuery extends Component {
      getVariables() {
        const { variables } = this.props;
        var query_variables = {};
        if(typeof variables.username !== typeof undefined) {
          query_variables.username = variables.username;
        }
        return query_variables;
      }
      render() {
        const { children } = this.props;
        const variables = this.getVariables();
        return(
          <Query query={GET_USER} variables={variables}>
            {({ loading, error, data }) => {
              if (loading) return <Loading />;
              if (error) return <Error><p>{error.message}</p></Error>;
              const { user } = data;
              return React.Children.map(children, function(child){
                return React.cloneElement(child, { user });
              })
            }}
          </Query>
        )
      }
    }

We set the query property and the parameters that are collected by the getVariables method on the GraphQL Query component. The rest is the same as any other query component that we have written. All child components receive a new property, called user, which holds all the information about the user, such as their name, their email, and their avatar image. The last step is to implement the UserProfileHeader component. This component renders the user property, with all its values. It is just simple HTML markup. Copy the following code into the header.js file, in the user folder:

    import React, { Component } from 'react';

    export default class UserProfileHeader extends Component {
      render() {
        const { avatar, email, username } = this.props.user;
        return (
          <div className="profileHeader">
            <div className="avatar">
              <img src={avatar}/>
            </div>
            <div className="information">
              <p>
                {username}
              </p>
              <p>
                {email}
              </p>
              <p>You can provide further information here and build
              your really personal header component for your users.</p>
            </div>
          </div>
        )
      }
    }

We have finished the new front-end components, but the UserProfile component is still not working.
The queries that we are using here either do not accept the username parameter or have not yet been implemented.

Querying the user profile

With the new profile page, we have to update our back end accordingly. Let's take a look at what needs to be done:

We have to add the username parameter to the schema of the postsFeed query and adjust the resolver function.
We have to create the schema and the resolver function for the new UserQuery component.

We will begin with the postsFeed query. Edit the postsFeed query in the RootQuery type of the schema.js file to match the following code:

    postsFeed(page: Int, limit: Int, username: String): PostFeed @auth

Here, I have added the username as an optional parameter. Now, head over to the resolvers.js file and take a look at the corresponding resolver function. Replace the signature of the function to extract the username from the variables, as follows:

    postsFeed(root, { page, limit, username }, context) {

To make use of the new parameter, add the following lines of code above the return statement:

    if(typeof username !== typeof undefined) {
      query.include = [{model: User}];
      query.where = { '$User.username$': username };
    }

In the preceding code, we fill the include field of the query object with the Sequelize model that we want to join. This allows us to filter on the associated User model in the next step. Then, we create a normal where object, in which we write the filter condition. If you want to filter the posts by an associated table of users, you can wrap the model and field names that you want to filter by with dollar signs. In our case, we wrap User.username with dollar signs, which tells Sequelize to query the User model's table and filter by the value of the username column. No adjustments are required for the pagination part. The GraphQL query is now ready. The great thing about the small changes that we have made is that we have just one API function that accepts several parameters, either to display posts on a single user profile, or to display a list of posts like a news feed.

Let's move on and implement the new user query. Add the following line to the RootQuery in your GraphQL schema:

    user(username: String!): User @auth

This query only accepts a username, but this time it is a required parameter in the new query. Otherwise, the query would make no sense, since we only use it when visiting a user's profile through their username. In the resolvers.js file, we will now implement the resolver function using Sequelize:

    user(root, { username }, context) {
      return User.findOne({
        where: { username: username }
      });
    },

In the preceding code, we use the findOne method of the User model by Sequelize and search for exactly one user with the username that we provided in the parameter. We also want to display the email of the user on the user's profile page. Add the email as a valid field on the User type in your GraphQL schema with the following line of code:

    email: String

With this step, our back end code and the user page are ready. This article walked you through the installation process of React Router and how to implement a route in React. Then we moved on to more advanced topics by implementing a user profile, similar to Facebook, with a GraphQL backend. If you found this post useful, do check out the book, Hands-on Full-Stack Web Development with GraphQL and React. This book teaches you how to build scalable full-stack applications while learning to solve complex problems with GraphQL.

How to build a Relay React App [Tutorial]
React vs. Vue: JavaScript framework wars
Working with the Vue-router plugin for SPAs
ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more

Bhagyashree R
25 Sep 2019
2 min read
Earlier this week, the ReactOS team announced the release of ReactOS 0.4.12. This release comes with a bunch of kernel improvements, Intel e1000 NIC driver support, font improvements, and more.

Key updates in ReactOS 0.4.12

Kernel updates

The filesystem infrastructure of ReactOS 0.4.12 has received quite a few improvements to enable Microsoft filesystem drivers. The team has also worked on the common cache module, which has deep ties to the memory manager. The team has also improved device power management, fixed support for PXE booting, and overhauled the write-protection functionality.

Window snapping

ReactOS 0.4.12 comes with support for window snapping. Users will now be able to align windows to sides or maximize and minimize them by dragging in specific directions. This release also comes with the keyboard shortcuts that accompany this feature.

Intel e1000 NIC driver

ReactOS 0.4.12 has a new driver to support the Network Interface Card (NIC) out of the box, so end users no longer need to manually find and install a driver. This new driver will be compatible with e1000 NICs.

Font improvements

In ReactOS 0.4.12, font rendering is made more robust and correct. This release fixes a series of problems that badly affected text rendering for buttons in a range of applications, from iTunes to various .NET applications.

User-mode DLLs

In this release, the team has made a range of improvements to user-mode components. The common controls (comctl) library is used by most Windows applications to draw various generic user interface elements, and the team has fixed an issue related to it. Other updates include drivers for MIDI instruments and an animated rotation bar in the startup/shutdown dialog.

Check out the official announcement to learn more about ReactOS 0.4.12 in detail.

ReactOS 0.4.11 is now out with kernel improvements, manifests support, and more!
Btrfs now boots ReactOS, a free and open-source alternative for Windows NT
ReactOS version 0.4.9 released with Self-hosting and FastFAT crash fixes
Understanding network port numbers, TCP, UDP, and ICMP on an operating system
Google's secret Operating System 'Fuchsia' will run Android Applications: 9to5Google Report
Getting Started with ASP.NET Core and Bootstrap 4

Packt
04 Nov 2016
17 min read
This article is by Pieter van der Westhuizen, author of the book Bootstrap for ASP.NET MVC - Second Edition. As developers, we can find it difficult to create great-looking user interfaces from scratch using HTML and CSS. This is especially hard for developers with extensive experience developing Windows Forms applications. Microsoft introduced Web Forms to remove the complexities of building websites and to ease the switch from Windows Forms to the web. This in turn makes it very hard for Web Forms developers, and even harder for Windows Forms developers, to switch to ASP.NET MVC. Bootstrap is a set of stylized components, plugins, and a layout grid that takes care of the heavy lifting. Microsoft has included Bootstrap in all ASP.NET MVC project templates since 2013. In this article, we will cover the following topics:

Files included in the Bootstrap distribution
How to create an empty ASP.NET site and enable MVC and static files
Adding the Bootstrap files using Bower
Automatically compiling the Bootstrap Sass files using Gulp
Installing additional icon sets
How to create a layout file that references the Bootstrap files

Files included in the Bootstrap distribution

In order to get acquainted with the files inside the Bootstrap distribution, you need to download its source files. At the time of writing, Bootstrap 4 was still in Alpha, and its source files could be downloaded from http://v4-alpha.getbootstrap.com.

Bootstrap style sheets

Do not be alarmed by the number of files inside the css folder. This folder contains four .css files and two .map files. We only need to include the bootstrap.css file in our project for the Bootstrap styles to be applied to our pages. The bootstrap.min.css file is simply a minified version of the aforementioned file. The .map files can be ignored for the project we'll be creating. These files are used as a type of debug symbol (similar to the .pdb files in Visual Studio), which allows developers to live-edit their preprocessor source files.

Bootstrap JavaScript files

The js folder contains two files. All the Bootstrap plugins are contained in the bootstrap.js file. The bootstrap.min.js file is simply a minified version of the aforementioned file. Before including the file in your project, make sure that you have a reference to the jQuery library, because all Bootstrap plugins require jQuery.

Bootstrap fonts/icons

Bootstrap 3 uses Glyphicons to display various icons and glyphs in Bootstrap sites. Bootstrap 4 no longer ships with Glyphicons included, but you still have the option to include it manually or to include your own icons. The following two icon sets are good alternatives to Glyphicons:

Font Awesome, available from http://fontawesome.io/
GitHub's Octicons, available from https://octicons.github.com/

Bootstrap source files

Before you can get started with Bootstrap, you first need to download the Bootstrap source files. At the time of writing, Bootstrap 4 was at version 4 Alpha 3. You have a few choices when adding Bootstrap to your project. You can download the compiled CSS and JavaScript files, or you can use a number of package managers to install the Bootstrap Sass source in your project. In this article, you'll be using Bower to add the Bootstrap 4 source files to your project.
For a complete list of Bootstrap 4 Alpha installation sources, visit http://v4-alpha.getbootstrap.com/getting-started/download/ CSS pre-processors CSS pre-processors process code written in a pre-processed language, such as LESS or Sass, and convert it into standard CSS, which in turn can be interpreted by any standard web browser. CSS pre-processors extend CSS by adding features that allow variables, mixins, and functions. The benefits of using CSS pre-processors are that they are not bound by any limitations of CSS. CSS pre-processors can give you more functionality and control over your style sheets and allows you to write more maintainable, flexible, and extendable CSS. CSS pre-processors can also help to reduce the amount of CSS and assist with the management of large and complex style sheets that can become harder to maintain as the size and complexity increases. In essence, CSS pre-processors such as Less and Sass enables programmatic control over your style sheets. Bootstrap moved their source files from Less to Sass with version 4.  Less and Sass are very alike in that they share a similar syntax as well as features such as variables, mixins, partials, and nesting, to name but a few. Less was influenced by Sass, and later on, Sass was influenced by Less when it adopted CSS-like block formatting, which worked very well for Less. Creating an empty ASP.NET MVC site and adding Bootstrap manually The default ASP.NET 5 project template in Visual Studio 2015 Update 3 currently adds Bootstrap 3 to the project. In order to use Bootstrap 4 in your ASP.NET project, you'll need to create an empty ASP.NET project and add the Bootstrap 4 files manually. To create a project that uses Bootstrap 4, complete the following process: In Visual Studio 2015, select New | Project from the File menu or use the keyboard shortcut Ctrl + Shift + N. From the New Project dialog window, select ASP.NET Core Web Application (.NET Core), which you'll find under Templates | Visual C# | Web. Select the Empty project template from the New ASP.NET Core Web Application (.NET Core) Project dialog window and click on OK. Enabling MVC and static files The previous steps will create a blank ASP.NET Core project. Running the project as-is will only show a simple Hello World output in your browser. In order for it to serve static files and enable MVC, we'll need to complete the following steps: Double-click on the project.json file inside the Solution Explorer in Visual Studio. Add the following two lines to the dependencies section, and save the project.json file: "Microsoft.AspNetCore.Mvc": "1.0.0", "Microsoft.AspNetCore.StaticFiles": "1.0.0" You should see a yellow colored notification inside the Visual Studio Solution Explorer with a message stating that it is busy restoring packages. Open the Startup.cs file. 
To enable MVC for the project, change the ConfigureServices method to the following: public void ConfigureServices(IServiceCollection services) {     services.AddMvc(); } Finally, update the Configure method to the following code: public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) {     loggerFactory.AddConsole();       if (env.IsDevelopment())     {         app.UseDeveloperExceptionPage();     }       app.UseStaticFiles();       app.UseMvc(routes =>     {         routes.MapRoute(             name: "default",             template: "{controller=Home}/{action=Index}/{id?}");     }); } The preceding code will enable logging and the serving of static files such as images, style sheets, and JavaScript files. It will also set the default MVC route. Creating the default route controller and view When creating an empty ASP.NET Core project, no default controller or views will be created by default. In the previous steps, we've created a default route to the Index action of the Home controller. In order for this to work, we first need to complete the following steps: In the Visual Studio Solution Explorer, right-click on the project name and select Add | New Folder from the context menu. Name the new folder Controllers. Add another folder called Views. Right-click on the Controllers folder and select Add | New Item… from the context menu. Select MVC Controller Class from the Add New Item dialog, located under .NET Core | ASP.NET, and click on Add. The default name when adding a new controller will be HomeController.cs: Next, we'll need to add a subfolder for the HomeController in the Views folder. Right-click on the Views folder and select Add | New Folder from the context menu. Name the new folder Home. Right-click on the newly created Home folder and select Add | New Item… from the context menu. Select the MVC View Page item, located under .NET Core | ASP.NET; from the list, make sure the filename is Index.cshtml and click on the Add button: Your project layout should resemble the following in the Visual Studio Solution Explorer: Adding the Bootstrap 4 files using Bower With ASP.NET 5 and Visual Studio 2015, Microsoft provided the ability to use Bower as a client-side package manager. Bower is a package manager for web frameworks and libraries that is already very popular in the web development community. You can read more about Bower and search the packages it provides by visiting http://bower.io/ Microsoft's decision to allow the use of Bower and package managers other than NuGet for client-side dependencies is because it already has such a rich ecosystem. Do not fear! NuGet is not going away. You can still use NuGet to install libraries and components, including Bootstrap 4! To add the Bootstrap 4 source files to your project, you need to follow these steps: Right-click on the project name inside Visual Studio's Solution Explorer and select Add | New Item…. Under .NET Core | Client-side, select the Bower Configuration File item, make sure the filename is bower.json and click on Add, as shown here: If not already open, double-click on the bower.json file to open it and add Bootstrap 4 to the dependencies array. The code for the file should look similar to the following: {    "name": "asp.net",    "private": true,   "dependencies": {     "bootstrap": "v4.0.0-alpha.3"   } } Save the bower.json file. Once you've saved the bower.json file, Visual Studio will automatically download the dependencies into the wwwroot/lib folder of your project. 
Because Bootstrap 4 also depends on jQuery and Tether, you'll notice that jQuery and Tether have been downloaded as part of the Bootstrap dependency. After you've added Bootstrap to your project, your project layout should look similar to the following screenshot:

Compiling the Bootstrap Sass files using Gulp

When adding Bootstrap, you'll notice that the bootstrap folder contains a subfolder called dist. Inside the dist folder, there are ready-to-use Bootstrap CSS and JavaScript files that you can use as-is if you do not want to change any of the default Bootstrap colours or properties. However, because the source Sass files were also added, this gives you extra flexibility in customizing the look and feel of your web application. For instance, the default colour of the base Bootstrap distribution is gray; if you want to change all the default colours to shades of blue, it would be tedious work to find and replace all references to the colours in the CSS file. For example, if you open the _variables.scss file, located in wwwroot/lib/bootstrap/scss, you'll notice the following code:

    $gray-dark:                 #373a3c !default;
    $gray:                      #55595c !default;
    $gray-light:                #818a91 !default;
    $gray-lighter:              #eceeef !default;
    $gray-lightest:             #f7f7f9 !default;

We're not going to go into too much detail regarding Sass in this article, but the $ in front of the names in the preceding code indicates that these are variables used to compile the final CSS file. In essence, changing the values of these variables will change the colors to the new values we've specified when the Sass file is compiled. To learn more about Sass, head over to http://sass-lang.com/

Adding Gulp npm packages

We'll need to add the gulp and gulp-sass Node packages to our solution in order to be able to perform actions using Gulp. To accomplish this, you will need to use npm. npm is the default package manager for the Node.js runtime environment. You can read more about it at https://www.npmjs.com/

To add the gulp and gulp-sass npm packages to your ASP.NET project, complete the following steps: Right-click on your project name inside the Visual Studio Solution Explorer and select Add | New Item… from the project context menu. Find the npm Configuration File item, located under .NET Core | Client-side. Keep its name as package.json and click on Add. If not already open, double-click on the newly added package.json file and add the following two dependencies to the devDependencies array inside the file:

    "devDependencies": {
      "gulp": "3.9.1",
      "gulp-sass": "2.3.2"
    }

This will add version 3.9.1 of the gulp package and version 2.3.2 of the gulp-sass package to your project. At the time of writing, these were the latest versions. Your version numbers might differ.

Enabling Gulp-Sass compilation

Visual Studio does not compile Sass files to CSS by default without installing extensions, but we can enable it using Gulp. Gulp is a JavaScript toolkit used to stream client-side code through a series of processes when an event is triggered during build. Gulp can be used to automate and simplify development and repetitive tasks, such as the following:

Minifying CSS, JavaScript, and image files
Renaming files
Combining CSS files

Learn more about Gulp at http://gulpjs.com/

Before you can use Gulp to compile your Sass files to CSS, you need to complete the following tasks: Add a new Gulp Configuration File to your project by right-clicking on the project name in the Solution Explorer and selecting Add | New Item… from the context menu.
The location of the item is .NET Core | Client-side. Keep the filename as gulpfile.js and click on the Add button. Change the code inside the gulpfile.js file to the following: var gulp = require('gulp'); var gulpSass = require('gulp-sass'); gulp.task('compile-sass', function () {     gulp.src('./wwwroot/lib/bootstrap/scss/bootstrap.scss')         .pipe(gulpSass())         .pipe(gulp.dest('./wwwroot/css')); }); The code in the preceding step first declares that we require the gulp and gulp-sass packages, and then creates a new task called compile-sass that will compile the Sass source file located at /wwwroot/lib/bootstrap/scss/bootstrap.scss and output the result to the /wwwroot/css folder. Running Gulp tasks With the gulpfile.js properly configured, you are now ready to run your first Gulp task to compile the Bootstrap Sass to CSS. Accomplish this by completing the following steps: Right-click on gulpfile.js in the Visual Studio Solution Explorer and choose Task Runner Explorer from the context menu. You should see all tasks declared in the gulpfile.js listed underneath the Tasks node. If you do not see tasks listed, click on the Refresh button, located on the left-hand side of the Task Runner Explorer window. To run the compile-sass task, right-click on it and select Run from the context menu. Gulp will compile the Bootstrap 4 Sass files and output the CSS to the specified folder. Binding Gulp tasks to Visual Studio events Right-clicking on every task in the Task Runner Explorer in order to execute each, could involve a lot of manual steps. Luckily, Visual Studio allows us to bind tasks to the following events inside Visual Studio: Before Build After Build Clean Project Open If, for example, we would like to compile the Bootstrap 4 Sass files before building our project, simply select Before Build from the Bindings context menu of the Visual Studio Task Runner Explorer: Visual Studio will add the following line of code to the top of gulpfile.js to tell the compiler to run the task before building the project: /// <binding BeforeBuild='compile-sass' /> Installing Font Awesome Bootstrap 4 no longer comes bundled with the Glyphicons icon set. However, there are a number of free alternatives available for use with your Bootstrap and other projects. Font Awesome is a very good alternative to Glyphicons that provides you with 650 icons to use and is free for commercial use. Learn more about Font Awesome by visiting https://fortawesome.github.io/Font-Awesome/ You can add a reference to Font Awesome manually, but since we already have everything set up in our project, the quickest option is to simply install Font Awesome using Bower and compile it to the Bootstrap style sheet using Gulp. To accomplish this, follow these steps: Open the bower.json file, which is located in your project route. If you do not see the file inside the Visual Studio Solution Explorer, click on the Show All Files button on the Solution Explorer toolbar. Add font-awesome as a dependency to the file. The complete listing of the bower.json file is as follows: {   "name": "asp.net",   "private": true,   "dependencies": {     "bootstrap": "v4.0.0-alpha.3",     "font-awesome": "4.6.3"   } } Visual Studio will download the Font Awesome source files and add a font-awesome subfolder to the wwwroot/lib/ folder inside your project. Copy the fonts folder located under wwwroot/font-awesome to the wwwroot folder. 
Next, open the bootstrap.scss file located in the wwwroot/lib/bootstrap/scss folder and add the following line at the end of the file: $fa-font-path: "/fonts"; @import "../../font-awesome/scss/font-awesome.scss"; Run the compile-sass task via the Task Runner Explorer to recompile the Bootstrap Sass. The preceding steps will include Font Awesome in your Bootstrap CSS file, which in turn will enable you to use it inside your project by including the mark-up demonstrated here: <i class="fa fa-pied-piper-alt"></i> Creating a MVC Layout page The final step for using Bootstrap 4 in your ASP.NET MVC project is to create a Layout page that will contain all the necessary CSS and JavaScript files in order to include Bootstrap components in your pages. To create a Layout page, follow these steps: Add a new sub folder called Shared to the Views folder. Add a new MVC View Layout Page to the Shared folder. The item can be found in the .NET Core | Server-side category of the Add New Item dialog. Name the file _Layout.cshtml and click on the Add button: With the current project layout, add the following HTML to the _Layout.cshtml file: <!DOCTYPE html> <html lang="en"> <head>     <meta charset="utf-8">     <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">     <meta http-equiv="x-ua-compatible" content="ie=edge">     <title>@ViewBag.Title</title>     <link rel="stylesheet" href="~/css/bootstrap.css" /> </head> <body>     @RenderBody()       <script src="~/lib/jquery/dist/jquery.js"></script>     <script src="~/lib/bootstrap/dist/js/bootstrap.js"></script> </body> </html> Finally, add a new MVC View Start Page to the Views folder called _ViewStart.cshtml. The _ViewStart.cshtml file is used to specify common code shared by all views. Add the following Razor mark-up to the _ViewStart.cshtml file: @{     Layout = "_Layout"; } In the preceding mark-up, a reference to the Bootstrap CSS file that was generated using the Sass source files and Gulp is added to the <head> element of the file. In the <body> tag, the @RenderBody method is invoked using Razor syntax. Finally, at the bottom of the file, just before the closing </body> tag, a reference to the jQuery library and the Bootstrap JavaScript file is added. Note that jQuery must always be referenced before the Bootstrap JavaScript file. Content Delivery Networks You could also reference the jQuery and Bootstrap library from a Content Delivery Network (CDN). This is a good approach to use when adding references to the most widely used JavaScript libraries. This should allow your site to load faster if the user has already visited a site that uses the same library from the same CDN, because the library will be cached in their browser. In order to reference the Bootstrap and jQuery libraries from a CDN, change the <script> tags to the following: <script src="https://code.jquery.com/jquery-3.1.0.js"></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.2/js/bootstrap.min.js"></script> There are a number of CDNs available on the Internet; listed here are some of the more popular options: MaxCDN: https://www.maxcdn.com/ Google Hosted Libraries: https://developers.google.com/speed/libraries/ CloudFlare: https://www.cloudflare.com/ Amazon CloudFront: https://aws.amazon.com/cloudfront/ Learn more about Bootstrap Frontend development with Bootstrap 4
5 blog posts that could make you a better Python programmer

Sam Wood
11 Feb 2019
2 min read
Python is one of the most important languages to master. It's top rated, fast growing, and in demand by businesses around the globe. There's a host of excellent insight across the web about how to become a better programmer with Python. Here are five blogs we think you need to read to upgrade your skills and knowledge.

1. A Brief History of Python

Did you know Python is actually older than Java, R, and JavaScript? If you want to be a better Python programmer, it pays to know your history. This quick blog post takes you through the language's journey from Christmas hobby project to its modern ascendancy with version 3.

2. Do you write Python Code or Pythonic Code?

Are you writing code in Python, or code for Python? When people talk about Pythonic code, they mean that the code uses Python idioms well, that is, it reads as natural or fluent in the language. Are you writing code like you would write Java or C++? This 4-minute blog post gives quick tips on how to make your code Pythonic.

3. The Singleton Python Design Pattern in Depth

The singleton pattern is a powerful design pattern that allows you to create only one instance of a class. You'd generally use it for things like the logging class and its subclasses, managing a connection to a database, or read-only singletons that store some global state. This in-depth blog post takes you through the three principal ways to implement singletons, for better Python code.

4. Why is Python so good for artificial intelligence and machine learning? 5 Experts Explain.

Python is the breakout language of data, zooming ahead of rival R to be dominant in the field of artificial intelligence and machine learning. But what is it about the programming language that makes it so well suited for this fast-growing field? In this blog post, five artificial intelligence experts all weigh in on what they think makes Python perfect for AI and machine learning.

5. Top 7 Python Programming Books You Need To Read

That's right - we put a list in our list. But if you really want to become a better Python programmer, you'll want to get to grips with this stack of amazing Python books. Whether you're a complete beginner or more experienced, these seven Python titles are the perfect way to upgrade your knowledge.
WordPress as a Web Application Framework

Packt
15 Jun 2017
20 min read
In this article written by Rakhitha Ratanayake, author of the book Wordpress Web Application Development - Third Edition you will learn that WordPress has matured from the most popular blogging platform to the most popular content management system. Thousands of developers around the world are making a living from WordPress design and development. As more and more people are interested in using WordPress, the dream of using this amazing framework for web application development is becoming possible. The future seems bright as WordPress hasalready got dozens of built-in features, which can be easily adapted to web application development using slight modifications. Since you are already reading this article, you have to be someone who is really excited to see how WordPress fits into web application development. Throughout this article, we will learn how we can inject the best practices of web development into WordPress framework to build web applications in rapid process.Basically, this article will be important for developers from two different perspectives. On one hand, beginner- to intermediate-level WordPress developers can get knowledge of cutting-edge web development technologies and techniques to build complex applications. On the other hand, web development experts who are already familiar with popular PHP frameworks can learn WordPress for rapid application development. So, let's get started! In this article, we will cover the following topics: WordPress as a CMS WordPress as a web application framework Simplifying development with built-in features Identifying the components of WordPress Making a development plan for forum management application Understanding limitations and sticking with guidelines Building a question-answer interface Enhancing features of the questions plugin (For more resources related to this topic, see here.) In order to work with this article, you should be familiar with WordPress themes, plugins, and its overall process. Developers who are experienced in PHP frameworks can work with this article while using the reference sources to learn WordPress. By the end of this article, you will have the ability to make the decision to choose WordPress for web development. WordPress as a CMS Way back in 2003, WordPress released its first version as a simple blogging platform and continued to improve until it became the most popular blogging tool. Later, it continued to improve as a CMS and now has a reputation for being the most popular CMS for over 5 years. These days everyone sees WordPress as a CMS rather than just a blogging tool. Now the question is, where will it go next? Recent versions of WordPress have included popular web development libraries such as Backbone.js and Underscore.js and developers are building different types of applications with WordPress. Also the most recent introduction of REST API is a major indication that WordPress is moving towards the direction of building web applications. The combination of REST API and modern JavaScript frameworks will enable developers to build complex web applications with WordPress. Before we consider the application development aspects of WordPress, it's ideal to figure out the reasons for it being such a popular CMS. 
The following are some of the reasons behind the success of WordPress as a CMS:

A plugin-based architecture for adding independent features, and the existence of over 40,000 open source plugins
The ability to create unlimited free websites at www.wordpress.com and use the basic WordPress features
A super-simple and easy-to-access administration interface
A fast learning curve and comprehensive documentation for beginners
A rapid development process involving themes and plugins
An active development community with awesome support
Flexibility in building websites with its themes, plugins, widgets, and hooks
The availability of large premium theme and plugin marketplaces, where developers can sell advanced plugins/themes and users can build advanced sites with those premium plugins/themes without needing a developer

These reasons prove why WordPress is the top CMS for website development. However, experienced developers who work with full stack web applications don't believe that WordPress has a future in web application development. While it's up for debate, we'll see what WordPress has to offer for web development. Once you finish reading this article, you will be able to decide whether WordPress has a future in web applications. I have been working with full stack frameworks for several years, and I certainly believe in the future of WordPress for web development.

WordPress as a web application framework

In practice, the decision to choose a development framework depends on the complexity of your application. Developers will tend to go for frameworks in most scenarios, so it's important to figure out why we go with frameworks for web development. Here's a list of possible reasons why frameworks become a priority in web application development:

Frameworks provide stable foundations for building custom functionalities
Usually, stable frameworks have a large development community with active support
They have built-in features to address the common aspects of application development, such as routing, language support, form validation, user management, and more
They have a large number of utility functions to address repetitive tasks

Full stack development frameworks such as Zend, CodeIgniter, and CakePHP adhere to the points mentioned in the preceding list, which in turn makes them the frameworks of choice for most developers. However, we have to keep in mind that WordPress is an application on top of whose existing features we build further applications. On the other hand, traditional frameworks are foundations used for building applications such as WordPress itself. Now, let's take a look at how WordPress fits into the boots of a web application framework.

The MVC versus event-driven architecture

A vast majority of web development frameworks are built to work with the Model-View-Controller (MVC) architecture, where an application is separated into independent layers called models, views, and controllers. In MVC, we have a clear understanding of what goes where and when each of the layers will be integrated in the process. So, the first thing most developers will look for is the availability of MVC in WordPress. Unfortunately, WordPress is not built on top of the MVC architecture. This is one of the main reasons why developers refuse to choose it as a development framework. Even though it is not MVC, we can create a custom execution process to make it work like an MVC application.
Also, we can find frameworks such as WP MVC, which can be used to take advantage of both WordPress's native functionality and vast plugin library, along with many of the advantages of an MVC framework. Unlike other frameworks, it won't have the full capabilities of MVC. However, the unavailability of the MVC architecture doesn't mean that we cannot develop quality applications with WordPress; there are many other ways to separate concerns in WordPress applications. WordPress, on the other hand, relies on a procedural event-driven architecture with its action hooks and filters system. Once a user makes a request, these actions get executed in a certain order to provide the response to the user. You can find the complete execution procedure at http://codex.wordpress.org/Plugin_API/Action_Reference. In the event-driven architecture, both model and controller code get scattered throughout the theme and plugin files.

Simplifying development with built-in features

As we discussed in the previous section, the quality of a framework depends on its core features. The better the quality of the core, the better it will be for developing quality and maintainable applications. It's surprising to see the number of WordPress features directly related to web development, even though it is meant for creating websites. Let's take a brief look at the WordPress core features to see how they fit into web application development.

User management

Built-in user management features are quite advanced and cater to the most common requirements of any web application. The user roles and capability handling makes it much easier to control access to specific areas of your application. We can separate users into multiple levels using roles and then use capabilities to define the permitted functionality for each user level. Most full stack frameworks don't have built-in user management features, and hence this can be considered an advantage of using WordPress.

Media management

File uploading and management is a common and time-consuming task in web applications. The media uploader, which comes built into WordPress, can be effectively used to automate file-related tasks without writing much source code. A super-simple interface makes it easy for application users to handle file-related tasks. Also, WordPress offers built-in functions for directly uploading media files without the media uploader. These functions can be used to handle advanced media uploading requirements without spending much time.

Template management

WordPress offers a simple template management system for its themes. It is not as complex or fully featured as a typical template engine. However, it offers a wide range of capabilities from a CMS development perspective, which we can extend to suit web applications.

Database management

In most scenarios, we will be using the existing database table structure for our application development. WordPress database management functionalities offer a quick and easy way of working with existing tables with its own style of functions. Unlike other frameworks, WordPress provides a built-in database structure, and hence most of the functionalities can be used to work directly with these tables without writing custom SQL queries.

Routing

Comprehensive support for routing is provided through permalinks. WordPress makes it simple to change the default routing and choose your own, in order to build search-engine-friendly URLs.
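To make the user management features concrete, here is a minimal sketch of role- and capability-based access control. The role name forum_moderator, the capability manage_forum, and the function name prefixed with my_app_ are assumptions for illustration; add_role(), current_user_can(), and wp_die() are standard WordPress functions.

<?php
// A minimal sketch of WordPress role and capability handling.
// The role 'forum_moderator' and capability 'manage_forum' are hypothetical.

// Register a custom role with a custom capability
// (run once, for example on plugin activation).
add_role( 'forum_moderator', 'Forum Moderator', array(
    'read'         => true,
    'manage_forum' => true,
) );

// Later, guard a restricted section of the application.
function my_app_render_moderation_screen() {
    if ( ! current_user_can( 'manage_forum' ) ) {
        wp_die( 'You are not allowed to access this section.' );
    }
    // Render the moderation interface here.
}

As a design note, checking capabilities rather than roles keeps the access logic flexible, since capabilities can be reassigned between roles without touching the guarded code.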
XML-RPC API

Building an API is essential for allowing third-party access to our application. WordPress provides a built-in API for accessing CMS-related functionality through its XML-RPC interface. Also, developers are allowed to create custom API functions through plugins, making it highly flexible for complex applications.

REST API

The REST API makes it possible to give third parties access to the application data, similar to the XML-RPC API. This API uses easy-to-understand HTTP requests and the JSON format, making it easier to communicate with WordPress applications. JavaScript is becoming the modern trend in developing applications, so the availability of JSON in the REST API allows external users to access and manipulate WordPress data within their JavaScript-based applications.

Caching

Caching in WordPress can be categorized into two sections: persistent and nonpersistent cache. Nonpersistent caching is provided by the WordPress cache object, while persistent caching is provided through its Transient API. Caching techniques in WordPress are simple compared to other frameworks, but they are powerful enough to cater to complex web applications.

Scheduling

As developers, you might have worked with cron jobs for executing certain tasks at specified intervals. WordPress offers the same scheduling functionality through built-in functions, similar to a cron job. However, WordPress cron execution is slightly different from normal cron jobs: in WordPress, cron won't be executed unless someone visits the site. Typically, it's used for scheduling future posts, but it can be extended to cater to complex scheduling functionality.

Plugins and widgets

The power of WordPress comes from its plugin mechanism, which allows us to dynamically add or remove functionality without interrupting other parts of the application. Widgets can be considered a part of the plugin architecture and will be discussed in detail further in this article.

Themes

The design of a WordPress site comes through its theme. A theme offers many built-in template files to cater to the default functionality, and themes can be easily extended for custom functionality. Also, the design of the site can be changed instantly by switching to a compatible theme.

Actions and filters

Actions and filters are part of the WordPress hook system. Actions are events that occur during a request; we can use WordPress actions to execute certain functionalities after a specific event is completed. Filters, on the other hand, are functions that are used to filter, modify, and return data. Flexibility is one of the key reasons for the higher popularity of WordPress compared to other CMSs. WordPress has its own way of extending the functionality of custom features as well as core features through actions and filters. These actions and filters allow developers to build advanced applications and plugins that can be easily extended with minor code changes. As a WordPress developer, it's a must to know the proper use of these actions and filters in order to build highly flexible systems.

The admin dashboard

WordPress offers a fully featured backend for administrators as well as normal users. These interfaces can be easily customized to adapt to custom applications. All the application-related lists, settings, and data can be handled through the admin section. The overall collection of features provided by WordPress can be effectively used to match the core functionalities provided by full stack PHP frameworks.
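Returning to the actions and filters just described, the following is a minimal sketch of both hook types. The function names prefixed with my_app_ are assumptions; add_action() and add_filter() are core WordPress APIs, and init and the_title are standard core hooks.

<?php
// A minimal sketch of the WordPress hook system.

// An action: run custom setup code when WordPress initializes.
function my_app_register_features() {
    // For example, register custom post types and taxonomies here.
}
add_action( 'init', 'my_app_register_features' );

// A filter: modify a value before it is returned to the caller.
function my_app_decorate_title( $title ) {
    return '[Forum] ' . $title;
}
add_filter( 'the_title', 'my_app_decorate_title' );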
Identifying the components of WordPress

WordPress comes with a set of prebuilt components intended to provide different features and functionality for an application. A flexible theme and powerful admin features act as the core of WordPress websites, while plugins and widgets extend the core with application-specific features. As a CMS, we all have a pretty good understanding of how these components fit into a WordPress website. Here, our goal is to develop web applications with WordPress, and hence it is important to identify the functionality of these components from the perspective of web applications. So, we will look at each of the following components, how they fit into web applications, and how we can take advantage of them to create flexible applications through a rapid development process:

The role of WordPress themes
The role of the admin dashboard
The role of plugins
The role of widgets

The role of WordPress themes

Most of us are used to seeing WordPress as a CMS. In its default view, a theme is a collection of files used to skin your web application layouts. In web applications, it's recommended to separate different components into layers such as models, views, and controllers. WordPress doesn't adhere to the MVC architecture. However, we can easily visualize themes or templates as the presentation layer of WordPress. In simple terms, views should contain the HTML needed to generate the layout, and all the data they need should be passed to the views. WordPress is built to create content management systems, and hence it doesn't focus on separating views from its business logic. Themes contain views, also known as template files, as a mix of both HTML code and PHP logic. As web application developers, we need to alter the behavior of existing themes in order to limit the logic inside templates and use plugins to pass the necessary model data to views.

Structure of a WordPress page layout

Typically, posts or pages created in WordPress consist of five common sections. Most of these components will be common across all the pages of the website. In web applications, we also separate common layout content into separate views to be included inside other views. It's important for us to focus on how we can adapt the layout into a web application-specific structure. Let's visualize the common layout of WordPress using the following diagram:

Having looked at the structure, it's obvious that the Header, Footer, and Main Content area are mandatory even for web applications. However, the Footer and Comments sections will play a less important role in web applications compared to web pages. The Sidebar is important in web applications, even though it won't be used with the same meaning; it can be quite useful as a dynamic widget area.

Customizing the application layout

Web applications can be categorized as projects and products. A project is something we develop targeting the specific requirements of a client. On the other hand, a product is an application created based on a common set of requirements for a wide range of users. Therefore, customizations will be required on the layouts of your product for different clients. WordPress themes make it simple to customize the layout and features using child themes. We can make the necessary modifications in the child theme while keeping the core layout in the parent theme. This prevents code duplication when customizing layouts. Also, the ability to switch themes is a powerful feature that eases layout customization.
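As a minimal sketch of the child theme approach, a child theme needs a style.css header naming its parent and, typically, a functions.php that loads the parent stylesheet. The theme and function names here are assumptions; wp_enqueue_style(), get_template_directory_uri(), and the Template header field are standard WordPress conventions.

/*
 style.css of the hypothetical child theme. 'parent-theme-folder'
 must match the directory name of the parent theme.

 Theme Name: My App Child
 Template:   parent-theme-folder
*/

<?php
// functions.php of the hypothetical child theme.
// Load the parent theme's stylesheet so the child can override it selectively.
function my_app_child_enqueue_styles() {
    wp_enqueue_style( 'parent-style', get_template_directory_uri() . '/style.css' );
}
add_action( 'wp_enqueue_scripts', 'my_app_child_enqueue_styles' );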
The role of the admin dashboard

The administration interface of an application plays one of the most important roles behind the scenes. WordPress offers one of the most powerful and easy-to-access admin areas among competing frameworks. Most of you should be familiar with using the admin area for CMS functionalities. However, we have to understand how each component in the admin area suits the development of web applications.

The admin dashboard

The dashboard is the location where all users get redirected once logged into the admin area. Usually, it contains dynamic widget areas with the most important data of your application. The dashboard can play a major role in web applications, compared to blogging or CMS functionality. It contains a set of default widgets that are mainly focused on core WordPress features such as posts, pages, and comments. In web applications, we can remove the existing CMS-related widgets and add application-specific widgets to create a powerful dashboard. WordPress offers a well-defined API to create custom admin dashboard widgets, and hence we can create a very powerful dashboard using custom widgets for the custom requirements of web applications.

Posts and pages

Posts in WordPress are built for creating content such as articles and tutorials. In web applications, posts will be the most important section for creating different types of data. Often, we will choose custom post types instead of normal posts for building advanced data creation sections. Pages, on the other hand, are typically used to provide the static content of the site. Usually, we have static pages such as About Us, Contact Us, Services, and so on.

Users

User management is a must-use section for any kind of web application. User roles, capabilities, and profiles will be managed in this section by the authorized users.

Appearance

Themes and application configurations will be managed in this section. Widgets and theme options will be the important sections for web applications. Generally, widgets are used in the sidebars of WordPress sites to display information such as recent members, comments, posts, and so on. However, in web applications, widgets can play a much bigger role, as we can use widgets to split the main template into multiple sections. Also, these types of widgetized areas become handy in applications where the majority of features are implemented with AJAX. The theme options panel can be used as the general settings panel of a web application, where we define the settings related to templates and generic site-specific configurations.

Settings

This section involves general application settings. Most of the prebuilt items in this section are suited to blogs and websites. We can customize this section to add new configuration areas related to the plugins used in our web application development. There are some other sections, such as links, pages, and comments, which will not be used frequently in complex web application development. The ability to add new sections is one of the key reasons for WordPress's flexibility.

The role of plugins

In normal circumstances, WordPress developers use functions that involve application logic scattered across theme files and plugins. Some developers even change the core files of WordPress. Altering WordPress core files or third-party theme and plugin files is considered a bad practice, since we lose all of our modifications on version upgrades and it may break the compatibility of other parts of WordPress. In web applications, we need to be much more organized.
In the The role of WordPress themes section, we discussed the purpose of having a theme for web applications. Plugins will be, and should be, used to provide the main logic and content of your application. The plugin architecture is a powerful way to add or remove features without affecting the core. Also, we have the ability to separate independent modules into their own plugins, making them easier to maintain. On top of this, plugins have the ability to extend other plugins. Since there are over 40,000 free plugins and a large number of premium plugins, sometimes you don't have to develop anything for WordPress applications; you can just take a number of plugins and integrate them properly to build advanced applications.

The role of widgets

The official documentation of WordPress refers to widgets as components that add content and features to your sidebar. From a typical blogging or CMS user's perspective, that's a completely valid statement. However, widgets offer more in web applications by going beyond the content that populates sidebars. Modern WordPress themes provide a wide range of built-in widgets for advanced functionality, making it much easier to build applications. The following screenshot shows a typical widgetized sidebar of a website:

We can use dynamic widgetized areas to include complex components as widgets, making it easy to add or remove features without changing source code. The following screenshot shows a sample dynamic widgetized area. We can use the same technique for developing applications with WordPress.

Throughout these sections, we covered the main components of WordPress and how they fit into actual web application development. Now we have a good enough understanding of the components to plan the application developed throughout this article.

A development plan for the forum management application

In this article, our main goal is to learn how we can build full stack web applications using built-in WordPress features. Therefore, I thought of building a complete application, explaining each and every aspect of web development. We will develop an online forum management system for creating public forums or managing a support forum for a specific product or service. This application can be considered a mini version of a powerful forum system like bbPress. We will be starting the development of this application here. Planning is a crucial task in web development, through which we can save a lot of time and avoid potential risks in the long run. First, we need to get a basic idea about the goal of this application, its features and functionalities, and the structure of its components, to see how it fits into WordPress.

Application goals and target audience

Anyone who uses the Internet on a day-to-day basis knows the importance of online discussion boards, also known as forums. These forums allow us to participate in a large community and discuss common matters, either related to a specific subject or a product. The application developed throughout this article is intended to provide a simple and flexible forum management application using a WordPress plugin, with the goals of:

Learning to develop a forum application
Learning to use the features of various online forums
Learning to manage a forum for your product or service

This application will be targeted towards all the people who have participated in an online forum or used the support system of a product they purchased.
I believe that both the output of this application and its contents will be ideal for PHP developers who want to jump into WordPress application development.

Summary

Our main goal was to find out how WordPress fits into web application development. We started this article by identifying the CMS functionalities of WordPress. We explored the features and functionalities of popular full stack frameworks and compared them with the existing functionalities of WordPress. Then, we looked at the existing components and features of WordPress and how each of those components fits into a real-world web application. We also planned the forum management application requirements and identified the limitations of using WordPress for web applications. Finally, we converted the default interface into a question-answer interface in a rapid process using existing functionalities, without interrupting the default behavior of WordPress and themes. By now, you should be able to decide whether to choose WordPress for your web application, visualize how your requirements fit into the components of WordPress, and identify and minimize the limitations.

Resources for Article:

Further resources on this subject:

Creating Your Own Theme—A Wordpress Tutorial [article]
Introduction to a WordPress application's frontend [article]
Wordpress: Buddypress Courseware [article]
Data science vs. machine learning: understanding the difference and what it means today

Richard Gall
02 Sep 2019
8 min read
One of the things that I really love about the tech industry is how often different terms - buzzwords especially - can cause confusion. It isn't hard to see this in the wild. Quora is replete with confused people asking about the difference between a 'developer' and an 'engineer' and how 'infrastructure' is different from 'architecture'. One of the biggest points of confusion is the difference between data science and machine learning. Both terms refer to different but related domains - given their popularity, it isn't hard to see how some people might be a little perplexed. This might seem like a purely semantic problem, but in the context of people's careers, as they make decisions about the resources they use and the courses they pay for, the distinction becomes much more important. Indeed, it can be perplexing for developers thinking about their career - with machine learning engineer starting to appear across job boards, it's not always clear where that role ends and 'data scientist' begins.

Tl;dr: To put it simply - and if you can't be bothered to read further - data science is a discipline or job role that's all about answering business questions through data. Machine learning, meanwhile, is a technique that can be used to analyze or organize data. So, data scientists might well use machine learning to find something out, but it would only be one aspect of their job.

But what are the implications of this distinction between machine learning and data science? What can the relationship between the two terms tell us about how technology trends evolve? And how can it help us better understand them both?

Read next: 9 data science myths debunked

What's causing confusion about the difference between machine learning and data science?

The data science v machine learning confusion comes from the fact that both terms have a significant grip on the collective imagination of the tech and business world. Back in 2012, the Harvard Business Review declared data scientist to be the 'sexiest job of the 21st century'. This was before the machine learning and artificial intelligence boom, but it's the point we need to go back to in order to understand how data has shaped the tech industry as we know it today.

Data science v machine learning on Google Trends

Take a look at this Google Trends graph:

Both terms broadly received a similar level of interest. 'Machine learning' was slightly higher throughout the noughties, and a larger gap has emerged more recently. However, despite that, it's worth looking at the period around 2014 when 'data science' managed to eclipse machine learning. Today, that feels remarkable, given how machine learning is a term that has extended into popular consciousness. It suggests that the HBR article was incredibly timely, identifying the emergence of the field. But more importantly, it's worth noting that this spike for 'data science' comes at the time that both terms surge in popularity. So, although machine learning eventually wins out, 'data science' was becoming particularly important at a time when these twin trends were starting to grow. This is interesting, and it's contrary to what I'd expect. Typically, I'd imagine the more technical term to take precedence over a more conceptual field: a technical trend emerges, and a more abstract concept gains traction afterwards. But here the concept - the discipline - spikes just at the point before machine learning properly takes off.
This suggests that the evolution and growth of machine learning begins with the foundations of data science. This is important. It highlights that the obsession with data science - which might well have seemed somewhat self-indulgent - was, in fact, an integral step for business to properly make sense of what the 'big data revolution' (a phrase that sounds eighty years old) meant in practice. Insofar as 'data science' is a term that really just refers to a role that's performed, its growth was ultimately evidence of a space being carved out inside modern businesses that gave a domain expert the freedom to explore and invent in the service of business objectives. If that was the baseline, then the continued rise of machine learning feels inevitable. From being contained in computer science departments in academia, and then spreading into business thanks to the emergence of the data scientist job role, we then started to see a whole suite of tools and use cases that were about much more than analytics and insight. Machine learning became a practical tool with practical applications everywhere. From cybersecurity to mobile applications, from marketing to accounting, machine learning couldn't be contained within the data science discipline. This wasn't just a conceptual point - practically speaking, a data scientist simply couldn't provide support for all the different ways in which business functions wanted to use machine learning. So, the confusion around the relationship between machine learning and data science stems from the fact that the two trends go hand in hand - or at least they used to. To properly understand how they're different, let's look at what a data scientist actually does.

Read next: Data science for non-techies: How I got started (Part 1)

What is data science, exactly?

I know you're not supposed to use Wikipedia as a reference, but the opening sentence in the entry for 'data science' is instructive: "Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data." The word that deserves your attention is multi-disciplinary, as this underlines what makes data science unique and why it stands outside the more specific taxonomy of machine learning terms. Essentially, it's a human activity as much as a technical one - it's about arranging, organizing, interpreting, and communicating data. To a certain extent, it shares a common thread of DNA with statistics. But although Nate Silver said that 'data scientist' was "a sexed up term for statistician", I think there are some important distinctions. To do data science well, you need to be deeply engaged with how your work integrates with the wider business strategy and processes. The term 'statistics' - like 'machine learning' - doesn't quite capture this. Indeed, to a certain extent, this has made data science a challenging field to work in. It isn't hard to find evidence that data scientists are trying to leave their jobs, frustrated with how their roles are being used and how they integrate into existing organisational structures.

How do data scientists use machine learning?

As a data scientist, your job is to answer questions. These are questions like:

What might happen if we change the price of a product in this way?
What do our customers think of our products?
How often do customers purchase products?
How are customers using our products?
How can we understand the existing market? How might we tackle it?
Where could we improve efficiencies in our processes?

That's just a small set. The types of questions data scientists will be tackling will vary depending on the industry, their company - everything. Every data science job is unique. But whatever questions data scientists are asking, it's likely that at some point they'll be using machine learning. Whether it's to analyze customer sentiment (grouping and sorting) or to predict outcomes, a data scientist will have a number of algorithms up their proverbial sleeves, ready to tackle whatever the business throws at them.

Machine learning beyond data science

The machine learning revolution might have started in data science, but it has rapidly expanded far beyond that strict discipline. Indeed, one of the reasons that some people are confused about the relationship between the two concepts is that machine learning is today touching just about everything, like water spilling out of its neat data science container.

Machine learning is for everyone

Machine learning is being used in everything from mobile apps to cybersecurity. And although data scientists might sometimes play a part in these domains, we're also seeing subject-specific developers and engineers taking more responsibility for how machine learning is used. One of the reasons for this is, as I mentioned earlier, the fact that a data scientist - or even a couple of them - can't do all the things that a business might want them to when it comes to machine learning. But another is the fact that machine learning is getting easier. You no longer need to be an expert to employ machine learning algorithms - instead, you need to have the confidence and foundational knowledge to use existing machine learning tools and products. This 'productization' of machine learning is arguably what's having the biggest impact on how we understand the topic. It's even shrinking data science, making it a more specific role. That might sound like data science is less important today than it was in 2014, but it can only be a good thing for data scientists - it means they are no longer being asked to spread themselves so thinly.

So, if you've been googling 'data science v machine learning', you now know the answer. The two terms are distinct, but they both come out of the 'big data revolution', which we're still living through. Both trends and terms are likely to evolve in the future, but they're certainly not going to disappear - as the data at our disposal grows, making effective use of it is only going to become more important.
How to convert Java code into Kotlin

Aanand Shekhar
14 Feb 2018
2 min read
This Kotlin programming tutorial has been taken from Kotlin Programming Cookbook. One of the best things about Kotlin is its interoperability with Java. If you're a Java programmer, that alone should be a reason to start learning it. If you're using an IntelliJ-based IDE, it's actually incredibly easy to convert Java code to Kotlin. In this step-by-step recipe, you'll find out how to do it.

What you need to convert your Java code to Kotlin

All you need to follow this recipe is an IntelliJ-based IDE installed, which compiles and runs Kotlin and Java.

How to do it...

Here are the steps you need to follow to convert a Java file to a Kotlin file:

In your IntelliJ IDE, open the Java file that you want to convert to Kotlin. Note that it has a .java extension.
Now, in the main menu, click on the Code menu and choose the Convert Java File to Kotlin File option. Your Java file will be converted into Kotlin, and the extension will now be .kt.

Here is an example of a Java file, and what we have after converting it to Kotlin (a sketch of both is shown at the end of this recipe).

A Kotlin file can be converted into Java, but it's better if you can avoid it or find an alternative way to do it. If you absolutely have to convert your Kotlin code to Java, click on Tools | Kotlin | Show Kotlin Bytecode in the menu. After clicking on Show Kotlin Bytecode, a window will open with the title Kotlin Bytecode. Click on Decompile, and a .java file will be generated, containing decompiled Java bytecode from the Kotlin code. Yes, it has a lot of unnecessary code that was not present in the original Java code, but that is the case with decompiled bytecode. At the moment, this is the only way to convert Kotlin code to Java. Copy the decompiled file into a .java file and remove the unnecessary code.

How it works...

Kotlin is a statically typed programming language that runs on the Java Virtual Machine and compiles into JVM-compatible bytecode. This is the reason we can convert Java code to Kotlin and mix Java and Kotlin code together. This is also the reason why you can, in a way, get Java code back from Kotlin (although the output is not exactly what you'd want).
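Since the example files from the original screenshots aren't reproduced here, the following is a small sketch of the kind of transformation the converter performs; the class and property names are assumptions. A typical Java value class before conversion:

// Person.java - a plain Java value class before conversion.
public class Person {
    private final String name;
    private final int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() { return name; }
    public int getAge() { return age; }
}

After running Convert Java File to Kotlin File, the IDE produces something close to the following, collapsing the fields, constructor, and getters into a single primary constructor:

// Person.kt - the Kotlin equivalent produced by the converter.
class Person(val name: String, val age: Int)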
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI

Sugandha Lahoti
29 May 2019
4 min read
Angular 8.0 was released yesterday as a major version of the popular framework for building web, mobile, and desktop applications. This release spans the framework, Angular Material, and the CLI. Angular 8.0 improves application startup time on modern browsers, provides new APIs for tapping into the CLI, and aligns Angular with the ecosystem and more web standards. The team behind Angular has released a new Deprecation Guide. Public APIs will now support features for N+2 releases. This means that a feature that is deprecated in 8.1 will keep working in the following two major releases (9 and 10). The team will continue to maintain Semantic Versioning and a high degree of stability, even across major versions.

Angular 8.0 comes with Differential Loading by Default

Differential loading is a process by which the browser chooses between modern or legacy JavaScript based on its own capabilities. The CLI looks at the target JS level in a user's tsconfig.json to determine whether or not to take advantage of differential loading. When target is set to es2015, the CLI generates and labels two bundles. At runtime, the browser uses attributes on the script tag to load the right bundle:

<script type="module" src="…"> for modern JS
<script nomodule src="…"> for legacy JS

Angular's Route Configurations now use Dynamic Imports

Previously, lazily loading parts of an application using the router was accomplished by using the loadChildren key in the route configuration. The previous syntax was custom to Angular and built into its toolchain. With version 8, it has migrated to industry-standard dynamic imports:

{path: `/admin`, loadChildren: () => import(`./admin/admin.module`).then(m => m.AdminModule)}

This will improve support from editors like VSCode and WebStorm, which will now be able to understand and validate these imports.

Angular 8.0 CLI updates

Workspace APIs in the CLI

Previously, developers using Schematics had to manually open and modify their angular.json to make changes to the workspace configuration. Angular 8.0 has a new Workspace API to make it easier to read and modify this file. The Workspace API provides an abstraction of the underlying storage format of the workspace and supports both reading and writing. Currently, the only supported format is the JSON-based format used by the Angular CLI.

New Builder APIs to run build and deployment processes

Angular 8.0 has new Builder APIs in the CLI that allow developers to tap into ng build, ng test, and ng run to perform processes like build and deployment. There is also an update to AngularFire, which adds a deploy command, making build and deployment to Firebase easier than ever:

ng add @angular/fire
ng run my-app:deploy

Once installed, this deployment command will both build and deploy an application in the way recommended by AngularFire.

Support for Web Worker

Web workers can speed up an application when doing CPU-intensive processing. They allow developers to offload work to a background thread, for tasks such as image or video manipulation. With Angular 8.0, developers can now generate new web workers from the CLI. To add a worker to a project, run:

ng generate webWorker my-worker

Once added, the web worker can be used normally in an application, and the CLI will be able to bundle and code split it correctly.
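Before looking at the instantiation shown next, here is a sketch of what such a worker file might contain. The file name my-worker.worker.ts and the computation are assumptions; addEventListener and postMessage are the standard Web Worker messaging APIs.

// my-worker.worker.ts - a hypothetical CLI-generated worker.
// Receives a message from the main thread, does CPU-heavy work,
// and posts the result back.
addEventListener('message', ({ data }) => {
  const result = expensiveComputation(data);
  postMessage(result);
});

function expensiveComputation(input: number): number {
  // Placeholder for image manipulation, number crunching, and so on.
  let acc = 0;
  for (let i = 0; i < input; i++) {
    acc += Math.sqrt(i);
  }
  return acc;
}

On the main-thread side, communication happens through worker.postMessage(...) and worker.onmessage, as with any Web Worker; the worker itself is created as follows: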
const worker = new Worker(`./my-worker.worker`, { type: `module` });

AngularJS Improvements

Unified Angular location service

In AngularJS, the $location service handles all routing configuration and navigation, encoding and decoding of URLs, redirects, and interactions with browser APIs. Angular uses its own underlying Location service for all of these tasks. Angular 8.0 now provides a LocationUpgradeModule that enables a unified location service, shifting responsibilities from the AngularJS $location service to the Angular Location service. This should improve the lives of developers using ngUpgrade who need routing in both the AngularJS and Angular parts of their application.

Improvements to lazy loading AngularJS

As of Angular version 8, lazy loading code can be accomplished simply by using the dynamic import syntax import('...'). The team behind Angular has documented best practices around lazy loading parts of your AngularJS application from Angular, making it easier to migrate the most commonly used features first and only load AngularJS for a subset of your application.

These are a select few updates; more information is available on the Angular Blog.

5 useful Visual Studio Code extensions for Angular developers
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability

ROS Architecture and Concepts

Packt
28 Dec 2016
24 min read
In this article by Anil Mahtani, Luis Sánchez, Enrique Fernández, and Aaron Martinez, authors of the book Effective Robotics Programming with ROS, Third Edition, you will learn the structure of ROS and the parts it is made up of. Furthermore, you will start to create nodes and packages and use ROS with examples using Turtlesim. The ROS architecture has been designed and divided into three sections or levels of concepts:

The Filesystem level
The Computation Graph level
The Community level

The first level is the Filesystem level. At this level, a group of concepts is used to explain how ROS is internally formed, the folder structure, and the minimum number of files that it needs to work. The second level is the Computation Graph level, where communication between processes and systems happens. In this section, we will see all the concepts and mechanisms that ROS has to set up systems, handle all the processes, communicate with more than a single computer, and so on. The third level is the Community level, which comprises a set of tools and concepts to share knowledge, algorithms, and code between developers. This level is of great importance; as with most open source software projects, having a strong community not only improves the ability of newcomers to understand the intricacies of the software and solve the most common issues, it is also the main force driving its growth.

Understanding the ROS Filesystem level

The ROS Filesystem is one of the strangest concepts to grasp when starting to develop projects in ROS, but with time and patience, the reader will easily become familiar with it and realize its value for managing projects and their dependencies. The main goal of the ROS Filesystem is to centralize the build process of a project, while at the same time providing enough flexibility and tooling to decentralize its dependencies. Similar to an operating system, an ROS program is divided into folders, and these folders contain files that describe their functionalities:

Packages: Packages form the atomic level of ROS. A package has the minimum structure and content to create a program within ROS. It may have ROS runtime processes (nodes), configuration files, and so on.
Package manifests: Package manifests provide information about a package: licenses, dependencies, compilation flags, and so on. A package manifest is managed with a file called package.xml.
Metapackages: When you want to aggregate several packages in a group, you will use metapackages. In ROS Fuerte, this form of ordering packages was called stacks. To maintain the simplicity of ROS, stacks were removed, and metapackages now fulfill this function. Many of these metapackages exist in ROS; one example is the navigation stack.
Metapackage manifests: Metapackage manifests (package.xml) are similar to those of a normal package, but with an export tag in XML. They also have certain restrictions on their structure.
Message (msg) types: A message is the information that a process sends to other processes. ROS has a lot of standard types of messages. Message descriptions are stored in my_package/msg/MyMessageType.msg.
Service (srv) types: Service descriptions, stored in my_package/srv/MyServiceType.srv, define the request and response data structures for services provided by each process in ROS.

In the following screenshot, you can see the content of the turtlesim package. What you see is a series of files and folders with code, images, launch files, services, and messages.
Keep in mind that the screenshot was edited to show a short list of files; the real package has more.

The workspace

In general terms, the workspace is a folder that contains packages; those packages contain our source files, and the environment or workspace provides us with a way to compile them. The workspace is useful when you want to compile various packages at the same time, and it is a good way to centralize all of our development. A typical workspace is shown in the following screenshot. Each folder is a different space with a different role:

The source space: In the source space (the src folder), you put your packages, projects, clone packages, and so on. One of the most important files in this space is CMakeLists.txt. The src folder has this file because it is invoked by cmake when you configure the packages in the workspace. This file is created with the catkin_init_workspace command.
The build space: In the build folder, cmake and catkin keep the cache information, configuration, and other intermediate files for our packages and projects.
The development (devel) space: The devel folder is used to keep the compiled programs. This is used to test the programs without the installation step. Once the programs are tested, you can install or export the package to share it with other developers.

You have two options with regard to building packages with catkin. The first one is to use the standard CMake workflow. With this, you can compile one package at a time, as shown in the following commands:

$ cmake packageToBuild/
$ make

If you want to compile all your packages, you can use the catkin_make command, as shown in the following commands:

$ cd workspace
$ catkin_make

Both commands build the executables in the build space directory configured in ROS. Another interesting feature of ROS is its overlays. When you are working with an ROS package, for example, turtlesim, you can do so with the installed version, or you can download the source and compile it to use your modified version. ROS permits you to use your version of this package instead of the installed one. This is very useful if you are working on an upgrade of an installed package.

Packages

Usually, when we talk about packages, we refer to a typical structure of files and folders. This structure looks as follows:

include/package_name/: This directory includes the headers of the libraries that you would need.
msg/: If you develop nonstandard messages, put them here.
scripts/: These are executable scripts that can be in Bash, Python, or any other scripting language.
src/: This is where the source files of your programs are present. You can create a folder for nodes and nodelets or organize it as you want.
srv/: This represents the service (srv) types.
CMakeLists.txt: This is the CMake build file.
package.xml: This is the package manifest.

To create, modify, or work with packages, ROS gives us tools for assistance, some of which are as follows:

rospack: This command is used to get information about or find packages in the system.
catkin_create_pkg: This command is used when you want to create a new package.
catkin_make: This command is used to compile a workspace.
rosdep: This command installs the system dependencies of a package.
rqt_dep: This command is used to see the package dependencies as a graph. If you want to see the package dependencies as a graph, you will find a plugin called package graph in rqt. Select a package and see the dependencies.
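Putting those tools together, a typical workflow for creating and building a new package looks like the following sketch; the workspace path ~/catkin_ws and the package name my_package are assumptions:

$ cd ~/catkin_ws/src
$ catkin_create_pkg my_package std_msgs roscpp
$ cd ~/catkin_ws
$ catkin_make
$ source devel/setup.bash
$ rospack find my_package

Here, catkin_create_pkg receives the package name followed by its dependencies, and sourcing devel/setup.bash makes the freshly built package visible to tools such as rospack.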
To move between packages and their folders and files, ROS gives us a very useful package called rosbash, which provides commands that are very similar to Linux commands. The following are a few examples:

roscd: This command helps us change the directory. It is similar to the cd command in Linux.
rosed: This command is used to edit a file.
roscp: This command is used to copy a file from a package.
rosd: This command lists the directories of a package.
rosls: This command lists the files of a package. It is similar to the ls command in Linux.

Every package must contain a package.xml file, as it is used to specify information about the package. If you find this file inside a folder, it is very likely that this folder is a package or a metapackage. If you open the package.xml file, you will see information about the name of the package, its dependencies, and so on. All of this is to make the installation and distribution of these packages easy. Two typical tags used in the package.xml file are <build_depend> and <run_depend>. The <build_depend> tag shows which packages must be installed before installing the current package. This is because the new package might use functionality contained in another package. The <run_depend> tag shows the packages that are necessary for running the code of the package. The following screenshot is an example of the package.xml file:

Metapackages

As we have shown earlier, metapackages are special packages with only one file inside: package.xml. This package does not have other files, such as code, includes, and so on. Metapackages are used to refer to other packages that are normally grouped following a feature-like functionality, for example, the navigation stack, ros_tutorials, and so on. You can convert your stacks and packages from ROS Fuerte to Kinetic and catkin using certain rules for migration. These rules can be found at http://wiki.ros.org/catkin/migrating_from_rosbuild. In the following screenshot, you can see the content of the package.xml file in the ros_tutorials metapackage. You can see the <export> tag and the <run_depend> tag. These are necessary in the package manifest, which is also shown in the following screenshot:

If you want to locate the ros_tutorials metapackage, you can use the following command:

$ rosstack find ros_tutorials

The output will be a path, such as /opt/ros/kinetic/share/ros_tutorials. To see the code inside, you can use the following command line:

$ vim /opt/ros/kinetic/ros_tutorials/package.xml

Remember that Kinetic uses metapackages, not stacks, but the rosstack find command-line tool is also capable of finding metapackages.

Messages

ROS uses a simplified message description language to describe the data values that ROS nodes publish. With this description, ROS can generate the right source code for these types of messages in several programming languages. ROS has a lot of predefined messages, but if you develop a new message, it will go in the msg/ folder of your package. Inside that folder, files with the .msg extension define the messages. A message must have two main parts: fields and constants. Fields define the type of data to be transmitted in the message, for example, int32, float32, and string, or new types that you have created earlier, such as type1 and type2. Constants define constant values that can be used within the message.
An example of an msg file is as follows:

int32 id
float32 vel
string name

In ROS, you can find a lot of standard types to use in messages, as shown in the following table:

Primitive type | Serialization                 | C++           | Python
bool           | unsigned 8-bit int            | uint8_t       | bool
int8           | signed 8-bit int              | int8_t        | int
uint8          | unsigned 8-bit int            | uint8_t       | int
int16          | signed 16-bit int             | int16_t       | int
uint16         | unsigned 16-bit int           | uint16_t      | int
int32          | signed 32-bit int             | int32_t       | int
uint32         | unsigned 32-bit int           | uint32_t      | int
int64          | signed 64-bit int             | int64_t       | long
uint64         | unsigned 64-bit int           | uint64_t      | long
float32        | 32-bit IEEE float             | float         | float
float64        | 64-bit IEEE float             | double        | float
string         | ascii string                  | std::string   | string
time           | secs/nsecs signed 32-bit ints | ros::Time     | rospy.Time
duration       | secs/nsecs signed 32-bit ints | ros::Duration | rospy.Duration

A special type in ROS is the header type. This is used to add the time, frame, and sequence number. This permits you to have the messages numbered, to see who is sending the message, and to have more functions that are transparent to the user and that ROS handles. The header type contains the following fields:

uint32 seq
time stamp
string frame_id

You can see the structure using the following command:

$ rosmsg show std_msgs/Header

Thanks to the header type, it is possible to record the timestamp and frame of what is happening with the robot. ROS provides certain tools to work with messages. The rosmsg tool prints out message definition information and can find the source files that use a message type. In upcoming sections, we will see how to create messages with the right tools.

Services

ROS uses a simplified service description language to describe ROS service types. This builds directly upon the ROS msg format to enable request/response communication between nodes. Service descriptions are stored in .srv files in the srv/ subdirectory of a package. To call a service, you need to use the package name along with the service name; for example, you will refer to the sample_package1/srv/sample1.srv file as sample_package1/sample1. Several tools exist to perform operations on services. The rossrv tool prints out the service descriptions and the packages that contain .srv files, and finds source files that use a service type. If you want to create a service, ROS can help you with the service generator. These tools generate code from an initial specification of the service. You only need to add the gensrv() line to your CMakeLists.txt file. In upcoming sections, you will learn how to create your own services.

Understanding the ROS Computation Graph level

ROS creates a network where all the processes are connected. Any node in the system can access this network, interact with other nodes, see the information that they are sending, and transmit data to the network. The basic concepts at this level are nodes, the master, Parameter Server, messages, services, topics, and bags, all of which provide data to the graph in different ways, as explained in the following list:

Nodes: Nodes are processes where computation is done. If you want to have a process that can interact with other nodes, you need to create a node with this process to connect it to the ROS network. Usually, a system will have many nodes to control different functions. You will see that it is better to have many nodes that provide only a single functionality, rather than one large node that does everything in the system. Nodes are written with an ROS client library, for example, roscpp or rospy.
The master: The master provides the registration of names and the lookup service to the rest of the nodes. It also sets up connections between the nodes. If you don't have it in your system, you can't communicate with nodes, services, messages, and others. In a distributed system, you will have the master on one computer, and you can execute nodes on this or other computers.
Parameter Server: Parameter Server gives us the possibility of storing data using keys in a central location. With these parameters, it is possible to configure nodes while they are running or to change the working parameters of a node.
Messages: Nodes communicate with each other through messages. A message contains data that provides information to other nodes. ROS has many types of messages, and you can also develop your own type of message using standard message types.
Topics: Each message must have a name to be routed by the ROS network. When a node is sending data, we say that the node is publishing a topic. Nodes can receive topics from other nodes by simply subscribing to the topic. A node can subscribe to a topic even if there aren't any other nodes publishing to this specific topic. This allows us to decouple the production from the consumption. It's important that topic names are unique, to avoid problems and confusion between topics with the same name.
Services: When you publish topics, you are sending data in a many-to-many fashion, but when you need a request or an answer from a node, you can't do it with topics. Services give us the possibility of interacting with nodes. Also, services must have a unique name. When a node has a service, all the nodes can communicate with it, thanks to ROS client libraries.
Bags: Bags are a format to save and play back ROS message data. Bags are an important mechanism for storing data, such as sensor data, that can be difficult to collect but is necessary to develop and test algorithms. You will use bags a lot while working with complex robots.

In the following diagram, you can see a graphic representation of this level. It represents a real robot working in real conditions. In the graph, you can see the nodes, the topics, which node is subscribed to a topic, and so on. This graph does not represent messages, bags, Parameter Server, and services; other tools are necessary to see a graphic representation of them. The tool used to create the graph is rqt_graph. These concepts are implemented in the ros_comm repository.

Nodes and nodelets

Nodes are executables that can communicate with other processes using topics, services, or Parameter Server. Using nodes in ROS provides us with fault tolerance and separates the code and functionalities, making the system simpler. ROS has another type of node called nodelets. These special nodes are designed to run multiple nodes in a single process, with each nodelet being a thread (light process). This way, we avoid using the ROS network among them, but permit communication with other nodes. With that, nodes can communicate more efficiently, without overloading the network. Nodelets are especially useful for camera systems and 3D sensors, where the volume of data transferred is very high. A node must have a unique name in the system. This name is used to permit the node to communicate with another node using its name without ambiguity. A node can be written using different libraries, such as roscpp and rospy; roscpp is for C++ and rospy is for Python. Throughout this article, we will use roscpp.
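To ground these concepts, here is a minimal sketch of a roscpp node that publishes messages on a topic. The node name talker and the topic name chatter are assumptions; the API calls are the standard roscpp publisher pattern.

#include "ros/ros.h"
#include "std_msgs/String.h"

int main(int argc, char **argv)
{
  // Register this process with the master under the name "talker".
  ros::init(argc, argv, "talker");
  ros::NodeHandle n;

  // Advertise a topic called "chatter" with an outgoing queue of 1000 messages.
  ros::Publisher pub = n.advertise<std_msgs::String>("chatter", 1000);

  ros::Rate rate(10); // publish at 10 Hz
  while (ros::ok())
  {
    std_msgs::String msg;
    msg.data = "hello ROS";
    pub.publish(msg);

    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}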
ROS has tools to handle nodes and give us information about them, such as rosnode. The rosnode tool is a command-line tool used to display information about nodes, such as listing the currently running nodes. The supported commands are as follows:

rosnode info NODE: This prints information about a node
rosnode kill NODE: This kills a running node or sends a given signal
rosnode list: This lists the active nodes
rosnode machine hostname: This lists the nodes running on a particular machine or lists machines
rosnode ping NODE: This tests the connectivity to the node
rosnode cleanup: This purges the registration information of unreachable nodes

A powerful feature of ROS nodes is the possibility of changing parameters when you start the node. This feature gives us the power to change the node name, topic names, and parameter names. We use this to reconfigure the node without recompiling the code so that we can use the node in different scenes. An example of changing a topic name is as follows:

$ rosrun book_tutorials tutorialX topic1:=/level1/topic1

This command will change the topic name topic1 to /level1/topic1. To change parameters in the node, you can do something similar to changing the topic name. For this, you only need to add an underscore (_) to the parameter name; for example:

$ rosrun book_tutorials tutorialX _param:=9.0

The preceding command will set param to the float number 9.0. Bear in mind that you cannot use names that are reserved by the system. They are as follows:

__name: This is a special, reserved keyword for the name of the node
__log: This is a reserved keyword that designates the location where the node's log file should be written
__ip and __hostname: These are substitutes for ROS_IP and ROS_HOSTNAME
__master: This is a substitute for ROS_MASTER_URI
__ns: This is a substitute for ROS_NAMESPACE

Topics

Topics are buses used by nodes to transmit data. Topics can be transmitted without a direct connection between nodes, which means that the production and consumption of data are decoupled. A topic can have various subscribers and can also have various publishers, but you should be careful when publishing the same topic with different nodes, as it can create conflicts. Each topic is strongly typed by the ROS message type used to publish it, and nodes can only receive messages of a matching type. A node can subscribe to a topic only if it has the same message type. The topics in ROS can be transmitted using TCP/IP or UDP. The TCP/IP-based transport is known as TCPROS and uses a persistent TCP/IP connection. This is the default transport used in ROS. The UDP-based transport is known as UDPROS and is a low-latency, lossy transport, so it is best suited to tasks such as teleoperation. ROS has a tool to work with topics called rostopic. It is a command-line tool that gives us information about a topic or publishes data directly on the network. This tool has the following parameters:

rostopic bw /topic: This displays the bandwidth used by the topic.
rostopic echo /topic: This prints messages to the screen.
rostopic find message_type: This finds topics by their type.
rostopic hz /topic: This displays the publishing rate of the topic.
rostopic info /topic: This prints information about the topic, such as its message type, publishers, and subscribers.
rostopic list: This prints information about active topics.
rostopic pub /topic type args: This publishes data to the topic. It allows us to create and publish data on whatever topic we want, directly from the command line.
Topics

Topics are buses used by nodes to transmit data. Topics can be transmitted without a direct connection between nodes, which means that the production and consumption of data are decoupled. A topic can have various subscribers and various publishers, but you should be careful when publishing to the same topic from different nodes, as it can create conflicts. Each topic is strongly typed by the ROS message type used to publish it; a node can only subscribe to a topic if it has the matching message type.

Topics in ROS can be transmitted using TCP/IP or UDP. The TCP/IP-based transport is known as TCPROS and uses persistent TCP/IP connections. This is the default transport used in ROS. The UDP-based transport is known as UDPROS and is a low-latency but lossy transport, so it is best suited to tasks such as teleoperation.

ROS has a tool to work with topics called rostopic. It is a command-line tool that gives us information about a topic or publishes data directly on the network. This tool has the following parameters:

rostopic bw /topic: This displays the bandwidth used by the topic
rostopic echo /topic: This prints messages to the screen
rostopic find message_type: This finds topics by their type
rostopic hz /topic: This displays the publishing rate of the topic
rostopic info /topic: This prints information about the topic, such as its message type, publishers, and subscribers
rostopic list: This prints information about active topics
rostopic pub /topic type args: This publishes data to the topic; it allows us to create and publish data on whatever topic we want, directly from the command line
rostopic type /topic: This prints the topic type, that is, the type of message it publishes

We will learn to use this command-line tool in upcoming sections.

Services

When you need to communicate with a node and receive a reply, in an RPC fashion, you cannot do it with topics; you need services. Services are developed by the user; ROS does not provide ready-made, standard services for nodes. The source files of services are stored in the srv folder. Like topics, services have an associated service type, which is the package resource name of the .srv file. As with other ROS filesystem-based types, the service type is the package name plus the name of the .srv file.

ROS has two command-line tools to work with services: rossrv and rosservice. With rossrv, we can see information about the services' data structures; it has exactly the same usage as rosmsg. With rosservice, we can list and query services. The supported commands are as follows:

rosservice call /service args: This calls the service with the arguments provided
rosservice find msg-type: This finds services by service type
rosservice info /service: This prints information about the service
rosservice list: This lists the active services
rosservice type /service: This prints the service type
rosservice uri /service: This prints the ROSRPC URI of the service
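As a minimal sketch of a service server in roscpp, the following node offers a service using the standard std_srvs/Trigger type; the node name (ping_server) and service name (ping) are illustrative:

#include <ros/ros.h>
#include <std_srvs/Trigger.h>

// The callback fills in the response and returns true on success.
bool handlePing(std_srvs::Trigger::Request &req,
                std_srvs::Trigger::Response &res)
{
  res.success = true;
  res.message = "pong";
  return true;
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "ping_server");
  ros::NodeHandle nh;

  // Advertise the service under a unique name.
  ros::ServiceServer server = nh.advertiseService("ping", handlePing);

  ros::spin();  // wait for and process incoming requests
  return 0;
}

While this node is running, rosservice call /ping from the command line would return success: True and message: pong.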
Messages

A node publishes information using messages, which are linked to topics. A message has a simple structure that uses standard types or types developed by the user. Message types use the standard ROS naming convention: the name of the package, then /, and then the name of the .msg file. For example, std_msgs/msg/String.msg has the message type std_msgs/String.

ROS has the rosmsg command-line tool to get information about messages. The accepted parameters are as follows:

rosmsg show: This displays the fields of a message
rosmsg list: This lists all messages
rosmsg package: This lists all of the messages in a given package
rosmsg packages: This lists all of the packages that contain messages
rosmsg users: This searches for code files that use the message type
rosmsg md5: This displays the MD5 sum of a message

Bags

A bag is a file created by ROS, with the .bag format, to save all of the information of messages, topics, services, and more. You can use this data later to visualize what happened; you can play, stop, rewind, and perform other operations with it. A bag file can be reproduced in ROS just like a real session, sending the topics at the same rate and with the same data. We normally use this functionality to debug our algorithms. To work with bag files, we have the following tools in ROS:

rosbag: This is used to record, play back, and perform other operations on bags
rqt_bag: This is used to visualize bag data in a graphic environment
rostopic: This helps us see the topics sent to the nodes

The ROS master

The ROS master provides naming and registration services to the rest of the nodes in the ROS system. It tracks publishers and subscribers to topics as well as services. The role of the master is to enable individual ROS nodes to locate one another; once they have located each other, they communicate in a peer-to-peer fashion. The following diagram shows the steps performed in ROS to advertise a topic, subscribe to a topic, and publish a message.

The master also provides Parameter Server. The master is most commonly run using the roscore command, which loads the ROS master along with other essential components.

Parameter Server

Parameter Server is a shared, multivariable dictionary that is accessible via the network. Nodes use this server to store and retrieve parameters at runtime. Parameter Server is implemented using XMLRPC and runs inside the ROS master, which means that its API is accessible via normal XMLRPC libraries. XMLRPC is a Remote Procedure Call (RPC) protocol that uses XML to encode its calls and HTTP as a transport mechanism. Parameter Server uses XMLRPC data types for parameter values, which include the following:

32-bit integers
Booleans
Strings
Doubles
ISO8601 dates
Lists
Base64-encoded binary data

ROS has the rosparam tool to work with Parameter Server. The supported parameters are as follows:

rosparam list: This lists all the parameters on the server
rosparam get parameter: This gets the value of a parameter
rosparam set parameter value: This sets the value of a parameter
rosparam delete parameter: This deletes a parameter
rosparam dump file: This saves Parameter Server to a file
rosparam load file: This loads a file (with parameters) onto Parameter Server
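The rosparam commands above have direct equivalents in the client libraries. The following is a minimal roscpp sketch of storing and retrieving a value on Parameter Server; the key /demo/max_speed is an illustrative name:

#include <ros/ros.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "param_demo");
  ros::NodeHandle nh;

  // Store a value on Parameter Server under a global key.
  nh.setParam("/demo/max_speed", 2.5);

  // Retrieve it again; getParam returns false if the key is missing.
  double max_speed;
  if (nh.getParam("/demo/max_speed", max_speed))
    ROS_INFO("max_speed is %f", max_speed);

  return 0;
}

Because the parameters live on the server rather than in the node, rosparam get /demo/max_speed from the command line would show the same value after this node has run.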
Understanding the ROS Community level

The ROS Community level concepts are the ROS resources that enable separate communities to exchange software and knowledge. These resources include the following:

Distributions: ROS distributions are collections of versioned metapackages that you can install. ROS distributions play a role similar to Linux distributions: they make it easier to install a collection of software, and they maintain consistent versions across a set of software.

Repositories: ROS relies on a federated network of code repositories, where different institutions can develop and release their own robot software components.

The ROS Wiki: The ROS Wiki is the main forum for documenting information about ROS. Anyone can sign up for an account, contribute their own documentation, provide corrections or updates, write tutorials, and more.

Bug ticket system: If you find a problem or want to propose a new feature, ROS provides a ticket system for doing so.

Mailing lists: The ROS user mailing list is the primary communication channel for news about updates to ROS, as well as a forum to ask questions about the ROS software.

ROS Answers: Users can ask and answer questions on this Q&A site.

Blog: You can find regular updates, photos, and news at http://www.ros.org/news.

Summary

This article provided you with general information about the ROS architecture and how it works. You saw the main concepts and the tools used to interact with nodes, topics, and services. Remember that if you have queries about something, you can use the official ROS resources at http://www.ros.org. Additionally, you can ask the ROS community questions at http://answers.ros.org.

Further resources on this subject:

The ROS Filesystem levels [article]
Using ROS with UAVs [article]
Face Detection and Tracking Using ROS, Open-CV and Dynamixel Servos [article]