
Packt
29 Sep 2016
11 min read

Directory Services

In this article by Gregory Boyce, author of Linux Networking Cookbook, we will focus on getting you started by configuring Samba as an Active Directory compatible directory service and joining a Linux box to the domain. (For more resources related to this topic, see here.)

If you have worked in corporate environments, then you are probably familiar with a directory service such as Active Directory. What you may not realize is that Samba, originally created as an open source implementation of Windows file sharing (SMB/CIFS), can now operate as an Active Directory compatible directory service. It can even act as a Backup Domain Controller (BDC) in an Active Directory domain. In this article, we will configure Samba to centralize authentication for your network services. We will also configure a Linux client to leverage it for authentication and set up a RADIUS server, which uses the directory server for authentication.

Configuring Samba as an Active Directory compatible directory service

As of Samba 4.0, Samba has the ability to act as a primary domain controller (PDC) in a manner that is compatible with Active Directory.

How to do it…

Installing on Ubuntu 14.04:

1. Configure your system with a static IP address and update /etc/hosts to point to that IP address rather than localhost.
2. Make sure that your time is kept up to date by installing an NTP client:
   sudo apt-get install ntp
3. Pre-emptively disable smbd/nmbd from running automatically:
   sudo bash -c 'echo "manual" > /etc/init/nmbd.override'
   sudo bash -c 'echo "manual" > /etc/init/smbd.override'
4. Install Samba and smbclient:
   sudo apt-get install samba smbclient
5. Remove the stock smb.conf:
   sudo rm /etc/samba/smb.conf
6. Provision the domain:
   sudo samba-tool domain provision --realm ad.example.org --domain example --use-rfc2307 --option="interfaces=lo eth1" --option="bind interfaces only=yes" --dns-backend BIND9_DLZ
7. Save the randomly generated admin password.
8. Symlink the AD krb5.conf into /etc:
   sudo ln -sf /var/lib/samba/private/krb5.conf /etc/krb5.conf
9. Edit /etc/bind/named.conf.local to allow Samba to publish data:
   dlz "AD DNS Zone" {
       # For BIND 9.9.0
       database "dlopen /usr/lib/x86_64-linux-gnu/samba/bind9/dlz_bind9_9.so";
   };
10. Edit /etc/bind/named.conf.options to use the Kerberos keytab, within the options stanza:
    tkey-gssapi-keytab "/var/lib/samba/private/dns.keytab";
11. Modify your zone record to allow updates from Samba:
    zone "example.org" {
        type master;
        notify no;
        file "/var/lib/bind/example.org.db";
        update-policy {
            grant AD.EXAMPLE.ORG ms-self * A AAAA;
            grant Administrator@AD.EXAMPLE.ORG wildcard * A AAAA SRV CNAME;
            grant SERVER$@AD.EXAMPLE.ORG wildcard * A AAAA SRV CNAME;
            grant DDNS wildcard * A AAAA SRV CNAME;
        };
    };
12. Modify /etc/apparmor.d/usr.sbin.named to allow bind9 access to a few additional resources, within the /usr/sbin/named stanza:
    /var/lib/samba/private/dns/** rw,
    /var/lib/samba/private/named.conf r,
    /var/lib/samba/private/named.conf.update r,
    /var/lib/samba/private/dns.keytab rk,
    /var/lib/samba/private/krb5.conf r,
    /var/tmp/* rw,
    /dev/urandom rw,
13. Reload the apparmor configuration:
    sudo service apparmor restart
14. Restart bind9:
    sudo service bind9 restart
15. Restart the samba service:
    sudo service samba-ad-dc restart

Installing on CentOS 7:

Unfortunately, setting up a domain controller on CentOS 7 is not possible using the default packages provided by the distribution. This is due to Samba utilizing the Heimdal implementation of Kerberos, while Red Hat, CentOS, and Fedora use the MIT Kerberos 5 implementation.

How it works…

The process for provisioning Samba to act as an Active Directory compatible domain is deceptively easy given all that is happening on the backend. Let us look at some of the expectations and see how we are going to meet them, as well as what is happening behind the scenes.
Active Directory requirements

Successfully running an Active Directory forest has a number of requirements that need to be in place:

- Synchronized time: AD uses Kerberos for authentication, which can be very sensitive to time skews. In our case, we are going to use ntpd, but other options, including openntpd or chrony, are also available.
- The ability to manage DNS records: AD automatically generates a number of DNS records, including SRV records that tell clients of the domain how to locate the domain controller itself.
- A static IP address: Because a number of pieces of AD functionality are very dependent on the specific IP address of your domain controller, it is recommended that you use a static IP address. A static DHCP lease may work as long as you are certain the IP address will not change; a rogue DHCP server on the network, for example, may cause difficulties.

Selecting a realm and domain name

The Samba team has published some very useful information regarding the proper naming of your realm and your domain, along with a link to Microsoft's best practices on the subject. It may be found at https://wiki.samba.org/index.php/Active_Directory_Naming_FAQ. The short version is that your realm should be globally unique, while the domain only needs to be unique within the layer 2 broadcast domain of your network. Preferably, the realm should be a subdomain of a registered domain owned by you. This ensures that you can buy SSL certificates if necessary and that you will not experience conflicts with outside resources. Samba-tool will default to using the first part of the realm you specified as the domain name (ad, from ad.example.org). The Samba team instead recommends using the second part (example, in our case), as it is more likely to be locally unique. Using a subdomain of your purchased domain, rather than the domain itself, makes life easier when splitting the internal DNS records managed by your AD instance from the more publicly accessible external names.
Using samba-tool

Samba-tool can work in an automated fashion with command-line options, or it can operate in interactive mode. We are going to specify the options that we want to use on the command line:

sudo samba-tool domain provision --realm ad.example.org --domain example --use-rfc2307 --option="interfaces=lo eth1" --option="bind interfaces only=yes" --dns-backend BIND9_DLZ

The realm and domain options here specify the name for your domain, as described above. Since we are going to be supporting Linux systems, we want the AD schema to support RFC 2307 settings, which allow for definitions of UID, GID, shell, home directory, and other settings that Unix systems will require. The pair of options specified on our command line restricts which interfaces Samba will bind to. While not strictly required, it is good practice to keep your Samba services bound to the internal interfaces. Finally, Samba wants to be able to manage your DNS in order to add systems to the zone automatically. This is handled by a variety of available DNS backends:

- SAMBA_INTERNAL: A built-in method where a Samba process acts as the DNS service. This is a good quick option for small networks.
- BIND9_DLZ: This option allows you to tie your local named/bind9 instance in with your Samba server. It introduces a named plugin for bind versions 9.8.x/9.9.x to support reading host information directly out of the Samba data stores.
- BIND9_FLATFILE: This option is largely deprecated in favor of BIND9_DLZ, but it is still an option if you are running an older version of bind. It causes the Samba services to periodically write out zone files, which Bind may use.

Bind configuration

Now that Samba is set up to support BIND9_DLZ, we need to configure named to leverage it. There are a few pieces to this support:

- tkey-gssapi-keytab: This setting in your named options section defines the Kerberos keytab file to use for DNS updates.
This allows the Samba server to communicate with the Bind server in order to let it know about zone file changes.
- dlz setting: This tells bind to load the dynamic module that Samba provides, in order to have it read from Samba's data files.
- Zone updating: In order to be able to update the zone file, you need to switch from an allow-update definition to update-policy, which allows more complex definitions, including Kerberos-based updates.
- Apparmor rule changes: Ubuntu uses a Linux Security Module called Apparmor, which allows you to define the allowed actions of a particular executable. Apparmor contains rules restricting the access rights of the named process, but these existing rules do not account for integration with Samba. We need to adjust the existing rules to allow named to access some additional required resources.

Joining a Linux box to the domain

In order to participate in an AD style domain, you must have the machine joined to the domain using Administrator credentials. This will create the machine's account within the database and provide credentials to the system for querying the LDAP server.

How to do it…

1. Install Samba, heimdal-clients, and winbind:
   sudo apt-get install samba heimdal-clients winbind
2. Populate /etc/samba/smb.conf:
   [global]
       workgroup = EXAMPLE
       realm = ad.example.org
       security = ads
       idmap uid = 10000-20000
       idmap gid = 10000-20000
       winbind enum users = yes
       winbind enum groups = yes
       template homedir = /home/%U
       template shell = /bin/bash
       winbind use default domain = yes
3. Join the system to the domain:
   sudo net ads join -U Administrator
4. Configure the system to use winbind for account information in /etc/nsswitch.conf:
   passwd: compat winbind
   group: compat winbind

How it works…

To join a Linux box to an AD domain, you need to utilize winbind, which provides a PAM interface for interacting with Windows RPC calls for user authentication. Winbind requires that you set up your smb.conf file and then join the domain before it functions.
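smb.conf uses an INI-like syntax, so the values above can be sanity-checked programmatically. The following Python sketch is purely illustrative (not part of the original recipe); it parses a copy of the [global] section and confirms the settings we care about:

```python
import configparser

# A copy of the [global] section from the recipe above.
SMB_CONF = """\
[global]
workgroup = EXAMPLE
realm = ad.example.org
security = ads
idmap uid = 10000-20000
idmap gid = 10000-20000
winbind use default domain = yes
"""

# smb.conf is close enough to INI syntax for configparser to handle it.
parser = configparser.ConfigParser()
parser.read_string(SMB_CONF)
glob = parser["global"]

assert glob["security"] == "ads"          # ADS security mode
assert glob["realm"] == "ad.example.org"  # Kerberos realm

# The idmap ranges must be well-formed (low < high).
low, high = (int(x) for x in glob["idmap uid"].split("-"))
assert low < high
print("smb.conf sanity check passed")
```

On a real system you would of course run testparm, Samba's own syntax checker, rather than roll your own.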
/etc/nsswitch.conf controls how glibc attempts to look up particular types of information. In our case, we are modifying it to talk to winbind for user and group information. Most of the actual logic is in the smb.conf file itself, so let us look at it.

Define the AD domain we're working with, including both the workgroup/domain and the realm:

workgroup = EXAMPLE
realm = ad.example.org

Now we tell Samba to use Active Directory Services (ADS) security mode:

security = ads

AD domains use Windows Security Identifiers (SIDs) to provide unique user and group identifiers. In order to be compatible with Linux systems, we need to map those SIDs to UIDs and GIDs. Since we're only dealing with a single client for now, we're going to let the local Samba instance map the SIDs to UIDs and GIDs from a range which we provide:

idmap uid = 10000-20000
idmap gid = 10000-20000

Some Unix utilities, such as finger, depend on the ability to loop through all of the user/group entries. On a large AD domain, this can be far too many entries, so winbind suppresses this capability by default. For now, we're going to enable it:

winbind enum users = yes
winbind enum groups = yes

Unless you go through specific steps to populate your AD domain with per-user home directory and shell information, Winbind will use templates for home directories and shells. We'll want to define these templates in order to avoid the defaults of /home/%D/%U (/home/EXAMPLE/user) and /bin/false:

template homedir = /home/%U
template shell = /bin/bash

The default winbind configuration takes users in the form of username@example.org rather than the more Unix-style plain username. Let's override that setting:

winbind use default domain = yes

Joining a Windows box to the domain

While not a Linux configuration topic, the most common use for an Active Directory domain is to manage a network of Windows systems.
While the overarching topic of managing Windows via an AD domain is too large and out of scope for this article, let's look at how we can join a Windows system to our new domain.

How to do it…

1. Click on Start and go to Settings.
2. Click on System.
3. Select About.
4. Select Join a Domain.
5. Type in the name of your domain; ad.example.org in our case.
6. Enter your administrator credentials for the domain.
7. Select a user who will own the system.

How it works…

When you tell your Windows system to join an AD domain, it first attempts to find the domain by looking up a series of SRV records for the domain, including _ldap._tcp.dc._msdcs.ad.example.org, in order to determine which hosts to connect to within the domain for authentication purposes. From there, a connection is established.

Resources for Article:

Further resources on this subject: OpenStack Networking in a Nutshell [article], Zabbix Configuration [article], Supporting hypervisors by OpenNebula [article]
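As a footnote to the SRV lookup described above, the well-known record name and the shape of its answer are easy to sketch. This Python fragment is illustrative only (a real client would ask its resolver library), and the sample answer string is hypothetical:

```python
def dc_srv_name(realm: str) -> str:
    """Build the SRV record name a client queries to find a domain controller."""
    return f"_ldap._tcp.dc._msdcs.{realm}"

def parse_srv(answer: str) -> dict:
    """Parse an SRV answer of the form 'priority weight port target'."""
    priority, weight, port, target = answer.split()
    return {"priority": int(priority), "weight": int(weight),
            "port": int(port), "target": target.rstrip(".")}

name = dc_srv_name("ad.example.org")
# Hypothetical answer pointing at a domain controller on the LDAP port.
record = parse_srv("0 100 389 dc1.ad.example.org.")
print(name, "->", record["target"], record["port"])
```

The port and target in the answer tell the client exactly where to open its LDAP connection, which is why AD needs control over DNS in the first place.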

Packt
04 Jun 2013
5 min read

Quick start – Monitoring hosts and services

(For more resources related to this topic, see here.)

Step 1 – Modifying nagios.cfg

Nagios knows which configuration files to parse by maintaining a master reference in the nagios.cfg file. For any new configuration file to be recognized by Nagios, either the file or the directory it is in has to be defined in nagios.cfg. For source installations, this file is located at /usr/local/nagios/etc/nagios.cfg. Perform the following steps to modify nagios.cfg:

1. Start by creating two new directories to store our configuration files:
   cd /usr/local/nagios/etc/objects
   mkdir hosts
   mkdir services
2. Then open nagios.cfg with your preferred text editor to add the new directories:
   # You can specify individual object config files as shown below:
   cfg_file=/usr/local/nagios/etc/objects/commands.cfg
   cfg_file=/usr/local/nagios/etc/objects/contacts.cfg
   cfg_file=/usr/local/nagios/etc/objects/timeperiods.cfg
   cfg_file=/usr/local/nagios/etc/objects/templates.cfg
   # Definitions for monitoring the local (Linux) host
   cfg_file=/usr/local/nagios/etc/objects/localhost.cfg
3. Add the following lines to allow all files in our hosts and services directories to be automatically added to the monitoring configuration:
   cfg_dir=/usr/local/nagios/etc/objects/hosts
   cfg_dir=/usr/local/nagios/etc/objects/services
4. Save the file and close it; it's time to add our first new host!

Step 2 – Adding a host

A host in Nagios is any machine or device with an IP address or a host name. The following example will demonstrate the creation of a basic host configuration file that we can add to the monitoring configuration:

1. Create a new file at /usr/local/nagios/etc/objects/hosts called test.cfg, and open it in a text editor:
   define host{
       host_name  test
       alias      test
       address    127.0.0.1
       use        linux-server
   }
   There are many more configuration directives that can be specified for a host, but as a best practice, it's best to specify as many of these values in a template as possible.
2. Save the file and close it. You just added your first new host!
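Host definitions are plain text with a fixed shape, so once you have more than a handful of hosts it can be handy to generate the files rather than hand-edit them. The following Python helper is a hypothetical sketch (not part of the Nagios tooling) that renders a definition like the one above from a mapping:

```python
def render_host(directives: dict) -> str:
    """Render a Nagios host definition block from a directive mapping."""
    width = max(len(key) for key in directives)
    lines = [f"    {key.ljust(width)}  {value}" for key, value in directives.items()]
    return "define host{\n" + "\n".join(lines) + "\n}\n"

cfg = render_host({
    "host_name": "test",
    "alias": "test",
    "address": "127.0.0.1",
    "use": "linux-server",  # inherit shared defaults from the template
})
print(cfg)
```

Writing the result to /usr/local/nagios/etc/objects/hosts/test.cfg would reproduce the file created in this step.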
Step 3 – Adding a service

Services in Nagios are processes, applications, metrics, and anything else that can be monitored under the scope of the associated host. The following example creates a basic service definition for the test host, and will be used to start monitoring with a simple HTTP check:

1. Service configurations are created in much the same way that hosts are. Create a new file named test.cfg at /usr/local/nagios/etc/objects/services, and open it in a text editor:
   define service{
       host_name            test
       service_description  HTTP
       check_command        check_http
       use                  generic-service
   }
   Services can be applied to a single host, a list of hosts, or even an entire host group, and the check_command specified for each of them can be customized to take custom arguments as well. However, for the moment, let's start things simple and keep moving forward.
2. Save the file and close it.

Step 4 – Creating and assigning contacts

Alerting is an essential part of monitoring infrastructure with Nagios, but it is recommended to minimize or even disable the use of alerts while setting up your monitoring environment. Users who receive too many false alerts will be trained to ignore them. Setting up effective alerting starts with creating appropriate contacts and contact groups for the hosts and services that are being monitored. Contacts also form the basis for host and service permissions in Nagios. A regular-level user in Nagios will only be able to view and submit commands for hosts and services that they are contacts for, unless he/she is granted some level of global access in the cgi.cfg file. The following are the steps to create and assign contacts:

1. Open /usr/local/nagios/etc/objects/contacts.cfg in your preferred text editor. By default, the nagiosadmin contact is already created for you. This account should typically be reserved for the top-level Nagios administrator. For other users, new contacts should be created.
2. Add a new contact definition with your preferred username and e-mail address.
   define contact{
       contact_name  test
       use           generic-contact
       alias         Test User
       email         test@example.com
   }
3. Let's also add this contact to the admins contact group, which already exists in the same file:
   define contactgroup{
       contactgroup_name  admins
       alias              Nagios Administrators
       members            nagiosadmin,test
   }
4. Save the file and close it.
5. In order to allow the new contact access to the web interface, it needs to be added to the htpasswd.users file using the following command (note that we do not pass -c, which would create a new file and overwrite the existing users):
   htpasswd /usr/local/nagios/etc/htpasswd.users test

Step 5 – Verifying configuration and restarting Nagios

All monitoring and event handling is done based on rules defined in the object configuration files, so the monitoring process requires a valid configuration in order to run. Always verify any configuration changes before attempting to restart Nagios, using the following steps. Attempting to restart Nagios with configuration errors will halt all monitoring on the system.

1. Nagios configurations can be verified on any installation by running the Nagios binary with the -v flag, followed by the main nagios.cfg file. On a source installation, this command can be run as follows:
   /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
   If all goes well, you'll see the following message verifying that there are no configuration errors and that Nagios is ready to be restarted:
   Total Warnings: 0
   Total Errors: 0
   Things look okay. No serious problems were detected during the pre-flight check.
2. Once configuration verification succeeds, Nagios can be restarted with the following command:
   /etc/init.d/nagios restart
3. Access the web interface to see the new host and service in Nagios!

Summary

In this article, we saw how to monitor hosts and services using a step-by-step approach. These five steps give an easy way to achieve our goal.
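As an addendum to Step 5: if you drive the verification from a script, you can gate the restart on the summary lines Nagios prints. The Python sketch below is illustrative; in practice, checking the exit status of the nagios -v command is simpler and more robust:

```python
import re

# A trimmed copy of the verification output shown above.
SAMPLE_OUTPUT = """\
Total Warnings: 0
Total Errors: 0

Things look okay. No serious problems were detected during the pre-flight check.
"""

def config_ok(output: str) -> bool:
    """Return True only when nagios -v reported zero errors."""
    match = re.search(r"Total Errors:\s*(\d+)", output)
    return match is not None and int(match.group(1)) == 0

assert config_ok(SAMPLE_OUTPUT)
print("safe to restart Nagios")
```

Anything other than an explicit zero-error summary means the restart should be skipped, since restarting with a broken configuration halts all monitoring.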
Resources for Article : Further resources on this subject: Monitoring CUPS- part2 [Article] Troubleshooting Nagios 3.0 [Article] Notifications and Events in Nagios 3.0- part2 [Article]

Packt
27 Aug 2013
6 min read

WebSocket – a Handshake!

(For more resources related to this topic, see here.) WebSockets define a persistent two-way communication between web servers and web clients, meaning that both parties can exchange message data at the same time. WebSockets introduce true concurrency, they are optimized for high performance, and result in much more responsive and rich web applications. The following diagram shows a server handshake with multiple clients. For the record, the WebSocket protocol has been standardized by the Internet Engineering Task Force (IETF) and the WebSocket API for web browsers is currently being standardized by the World Wide Web Consortium (W3C)—yes, it's a work in progress. No, you do not need to worry about enormous changes, as the current specification has been published as a "proposed standard".

Life before WebSocket

Before diving into the WebSockets' world, let's have a look at the existing techniques used for bidirectional communication between servers and clients.

Polling

Web engineers initially dealt with the issue using a technique called polling. Polling is a synchronous method (that is, no concurrency) that performs periodic requests, regardless of whether data exists for transmission. The client makes consecutive requests after a specified time interval. Each time, the server responds with the available data or with a proper warning message. Though polling "just works", it is easy to understand that this method is overkill for most situations and extremely resource consuming for modern web apps.

Long polling

Long polling is a similar technique where, as its name indicates, the client opens a connection and the server keeps the connection active until some data is fetched or a timeout occurs. The client can then start over and perform a sequential request. Long polling is a performance improvement over polling, but the constant requests might slow down the process.

Streaming

Streaming seemed like the best option for real-time data transmission.
When using streaming, the client performs a request and the server keeps the connection open indefinitely, fetching new data when ready. Although this is a big improvement, streaming still includes HTTP headers, which increase file size and cause unnecessary delays.

Postback and AJAX

The web has been built around the HTTP request-response model. HTTP is a stateless protocol, meaning that the communication between two parties consists of independent pairs of requests and responses. In plain words, the client asks the server for some information, the server responds with the proper HTML document, and the page is refreshed (that's actually called a postback). Nothing happens in between, until a new action is performed (such as the click of a button or a selection from a drop-down menu). Any page load is followed by an annoying (in terms of user experience) flickering effect. It was not until 2005 that the postback flickering was bypassed thanks to Asynchronous JavaScript and XML (AJAX). AJAX is based on JavaScript's XMLHttpRequest object and allows asynchronous execution of JavaScript code without interfering with the rest of the user interface. Instead of reloading the whole page, AJAX sends and receives only a portion of the web page. Imagine you are using Facebook and want to post a comment on your Timeline. You type a status update in the proper text field, hit Enter and...voila! Your comment is automatically published without a single page load. Unless Facebook used AJAX, the browser would need to refresh the whole page in order to display your new status. AJAX, accompanied by popular JavaScript libraries such as jQuery, has strongly improved the end user experience and is widely considered a must-have attribute for every website. It was only after AJAX that JavaScript became a respectable programming language, instead of being thought of as a necessary evil. But it's still not enough.
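WebSockets take a different approach: a single HTTP handshake upgrades the connection, after which both sides keep it open. The handshake itself is simple enough to verify by hand. Per RFC 6455, the server concatenates the client's Sec-WebSocket-Key with a fixed GUID, SHA-1 hashes the result, and returns it Base64-encoded as Sec-WebSocket-Accept. A few lines of Python reproduce it, using the RFC's own test vector:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server must return."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Test vector straight from RFC 6455, section 1.3.
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

This round trip is how the server proves it actually understood the upgrade request rather than echoing arbitrary headers back.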
Long polling is a useful technique that makes it seem like your browser maintains a persistent connection, while the truth is that the client makes continuous calls! This might be extremely resource-intensive, especially on mobile devices, where speed and data size really matter. All of the methods previously described provide real-time bidirectional communication, but have three obvious disadvantages in comparison with WebSockets:

- They send HTTP headers, making the total file size larger.
- The communication type is half duplex, meaning that each party (client/server) must wait for the other one to finish.
- The web server consumes more resources.

The postback world seems like a walkie-talkie: you need to wait for the other guy to finish speaking (half-duplex). In the WebSocket world, the participants can speak concurrently (full-duplex)! The web was initially built for displaying text documents, but think how it is used today. We display multimedia content, add location capabilities, accomplish complex tasks and, hence, transmit data other than text. AJAX and browser plugins such as Flash are all great, but a more native way of doing things is required. The way we use the web nowadays calls for a holistic new application development framework.

Then came HTML5

HTML5 makes a huge, yet justifiable, buzz nowadays as it introduces vital solutions to the problems discussed previously. If you are already familiar with HTML5, feel free to skip this section and move on. HTML5 is a robust framework for developing and designing web applications. HTML5 is not just a new markup or some new styling selectors, neither is it a new programming language. HTML5 stands for a collection of technologies, programming languages, and tools, each of which has a discrete role, and all of these together accomplish a specific task: to build rich web apps for any kind of device. The main HTML5 pillars are Markup, CSS3, and JavaScript APIs.
The following diagram shows the HTML5 components. Here are the dominant members of the HTML5 family; as this book does not cover the whole set of HTML5, I suggest you visit html5rocks.com and get started with hands-on examples and demos.

- Markup: structural elements, form elements, attributes
- Graphics: style sheets, Canvas, SVG, WebGL
- Multimedia: audio, video
- Storage: cache, local storage, Web SQL
- Connectivity: WebMessaging, WebSocket, WebWorkers
- Location: Geolocation

Although Storage and Connectivity are supposed to be the most advanced topics, you do not need to worry if you are not an experienced web developer. Moreover, managing WebSockets via the HTML5 API is pretty simple to grasp, so take a deep breath and dive in with no fear.

Summary

This article has helped you understand the basics of WebSocket. It also gave you an overview of life before WebSocket, explaining postback and AJAX before introducing HTML5.

Resources for Article:

Further resources on this subject: Building HTML5 Pages from Scratch [Article], Basic use of Local Storage [Article], Scalability, Limitations, and Effects [Article]

Packt
06 Aug 2013
20 min read

Network Monitoring Essentials

(For more resources related to this topic, see here.)

Monitoring basics

By now, you have already added some devices to your own Orion NPM installation and are ready to dive right in. In actuality, network monitoring with Orion NPM fundamentally consists of two different actions. The first is Orion NPM polling devices and discovering nodes. The second is an administrator logging in to the Orion dashboard and looking at the statistics and node information. All monitoring is performed by viewing the pages available from the various tabs at the top of the dashboard website. By default, the Orion Summary Home page opens directly after logging into Orion NPM's dashboard. This is the page you will find yourself looking at most of the time while managing and monitoring your network with Orion NPM. There are several modules on every page of the dashboard, each providing different pieces of information. If you are not sure what a module contains, or if you want more information about what the module is displaying, click on the HELP button at the top-right corner for a detailed explanation.

Map

The most prominent module displayed on the main page is the network map on the right-hand side. It is designed to display the "big picture" of your entire network that is monitored by Orion NPM. Questions that are quickly answered by the network map are: "Are some nodes up?", "Are some nodes down?", "Are there any network performance issues (slowness or packet loss) between WAN links?", and "Are there any network performance issues that I should be aware of?" The network map can point you in the right direction quickly if a network issue needs to be resolved. A picture truly does speak a thousand words! The map that is displayed after a fresh installation of Orion NPM is the sample map. The sample map is only a placeholder and does not display any of your nodes on it.
To create and customize your own network maps, you need to use the Orion Network Atlas utility.

All Nodes and All Groups

The next module on the Home Summary page that will strike your attention is All Nodes. All Nodes displays every node that is being monitored by Orion NPM. On the right-hand side of the screen, underneath the network map, is the All Groups module. This displays any groups that I have configured in Orion NPM. This module operates in the same way that the All Nodes module does, aside from the fact that it shows the status of an entire custom group instead of just the nodes themselves. The view is configurable, but by default, the module displays first by vendor, then its up or down status, and then the host name. If the host name is not known, it will display the IP address of the node. A sample of my own network lab is displayed in the previous screenshot. All Nodes displays which nodes are up and which nodes are not responding to Orion NPM.

All Triggered Alerts

Continuing down the left-hand side of the page is the All Triggered Alerts module. This module is very helpful in that it displays the time and date, node title, current value if available (such as Active), and the description of the latest alerts that Orion NPM triggered. In the preceding example, you can see that on March 11, 2013 at 8:52 A.M., Orion NPM detected high packet loss on the JD-1130AP node. Then, one minute later, Orion NPM triggered an alert that it could no longer communicate with the node. To see more details about that specific node, click on the node name to open the Node Details View page.

Event Summary and Last 25 Events

The next modules are Event Summary and Last 25 Events. The Event Summary module displays a summary list for all types of events related to network monitoring, and only displays events from the last 24 hours. It is useful when you need a quick rundown of the total number of events that occurred throughout the day.
Since this is only a summary of events from the day, there is very little information provided. To see more information on a specific line item in the Event Summary module, click on one of the event titles to open a filtered view in the Events web page. Clicking on an event title, such as 5 Alert Triggered shown in the preceding example, will open the Events page with a filter to only display Alert Triggered events. Clicking on a 2 Node Down event will open the Events page with a filter to only display Node Down information. The final module on the Home Summary page is Last 25 Events, which shows more event information in a historical context. A sample of what it displays is shown in the following screenshot: From the example, you can see that someone caused quite a few events within a short amount of time. On the JD-3500XL node, you can see that a FastEthernet port was administratively disabled, a port labeled Wireless Trunk came online, the JD-1130AP node's packet loss rose above the loss threshold, and other important information. The downside of this module is that it will only show the last 25 events, but it is extremely useful in assisting with troubleshooting a recent issue.

Search nodes

Search Nodes is a useful module where you can search for a node to quickly access that node's detail view. For example, if I can't remember the name of a node but I do know where it is located, I can search the location description instead of clicking through all of the nodes until I find it. In the following example, I am searching for all nodes in the Orlando location. All of my nodes in the Orlando location are displayed in the search results. From this point, I can click on the node name to open the Node Details View page.

Custom

Custom Object Resource is a module that allows you to create your own modules by displaying polling data of your choice. Click on the Configure this resource link, or the EDIT button, to view the contents.
If there are some modules or resources that you do not need to view, or certain pieces of information that you want to add to your pages, you can do so by customizing the web pages and modules.

Customizing views

A view in Orion NPM is the same thing as a web page. Views are displayed when clicking on a link in the menu bar, when clicking on a node in the dashboard, or when clicking on an interface in the node's detail view. Each module on each page can be fully rearranged as you see fit. It is possible that you want to view the network map in the left-hand column instead of the right-hand column on the home summary page. You can make that change from the view editor.

Looking at the Orion Web Administration page, the Views module is where we are going to focus our attention. It contains the following three links:

- Manage Views
- Add New View
- Views by Device Type

Manage Views

Clicking on Manage Views will display the Manage Views editor. Orion NPM sets up default views in an out-of-the-box installation. To edit a view, highlight it and click on the Edit button.

There is no Save or Undo button when editing views in Orion NPM. Once a change has been made to a view, it is permanent. Make note of the settings in the view you are editing before changing them, in case you wish to revert back.

To demonstrate how to customize a page view, the following is an example of how to do so with the Orion Summary Home page. In this example, I will perform the following:

- Move the Map from the right-hand column to the left-hand column
- Gather All Nodes, All Groups, and All Dependencies together in the right-hand column under All Nodes
- Place All Triggered Alerts, Event Summary, and Last 25 Events under the Map in the left-hand column
- Remove the Search module
- Set the columns to be of equal width

Perform the following steps to execute these tasks:

1. Highlight Map, All Triggered Alerts, and Event Summary in Column 2, then click on the left arrow button.
Hold the Ctrl key on the keyboard in order to select multiple options in the column.

2. Reorder the Column 1 list by using the up arrow button until Map is first on the list.
3. Highlight All Nodes in Column 1, then click on the right arrow button to move it to Column 2.
4. Reorder the Column 2 list by using the up arrow button until All Nodes is first on the list.
5. Highlight Search Nodes.
6. Click on the red X to remove it from the column.
7. Click on the Edit button next to the column widths.
8. Change the column widths for columns 1 and 2 to 500.
9. Ensure that the Layout is set for two columns.
10. Click on SUBMIT when finished.

Your page view should now look like the following screenshot:

The name of the view is also the web page's title. I decided not to change the name of the view for the sake of simplicity. Also, I did not apply a view limitation. The Orion Summary Home view should not have a view limitation applied, since it is the main view page of Orion NPM; applying a view limitation may omit important node information, or the limitation may render the page useless.

Click on the PREVIEW button to preview the view layout in a new web browser window. It will look similar to the following screenshot:

Since I was satisfied with my changes, I clicked DONE. As you can see, the options available when editing a view are extremely straightforward. You can change the resources (or modules) for each column, edit the column sizes, and attach view limitations. The only item you cannot change when customizing a view is the type. When you change the name of a view, only the title is changed; its contents will not change.

Add New View

Orion NPM already has a great number of default pages and default views associated with each page. However, there may be cases where a default view will not suffice and you want to create your own.
The following are a few examples:

- Create a view that consists of a specific customer's equipment
- Create a view for all volumes in a VSAN
- Create a custom view for all monitored UPS units
- Create a custom view that lists all nodes in a single location

The list could go on, but you can see that there are several reasons to create your own view.

Defining the name of the view is the first step. The name you define will be the title of the page when it is saved. Make sure it is a meaningful name that makes sense for what you are creating. Second, you need to choose what type of view it will be. There are several different types of views, each suiting a different purpose. The list of view types is as follows:

- Summary: Displays network-wide information. Summary is the default option. If you will be creating a view that includes multiple hardware types or locations, the Summary type will be your best option.
- Node Details: Displays information about a single node. This is the option you would choose when creating a customized view page for a hardware device type. For example, you could create a custom view specifically for firewalls.
- Volume Details: Displays information about a single volume. Depending on your needs, you may want to create a custom view for a volume within one of your servers.
- Group Details: Displays information about groups.
- Interface Details: Displays information about a single interface.
- VSAN Details: Displays information about a single virtual storage area network device.
- UCS Chassis Details: Displays information about a Cisco Unified Computing System chassis. If you need to create a customized view for a UCS node, this is the view type to choose.
- Virtualization Summary: Displays information about your VMware infrastructure.
- Cluster Details: Displays information about a VMware cluster.
- Datacenter Details: Displays information about a VMware Datacenter.

Third, you need to select the resources for each column.
These are the exact same resources available when editing a view, as discussed in the Manage Views section.

The last item that you can define is a view limitation. A view limitation is optional, and it will limit the network devices that can be displayed within the view. An example of needing to apply a view limitation would be to limit a page to only display nodes from a specific hardware manufacturer. Or, you could add a limitation to only display nodes that reside in a specific location. The reasons why you would want to apply a limitation are virtually endless. Just keep in mind that view limitations are optional and are not required in order to create a new view.

Only one limitation can be applied to a view. It is not possible to apply multiple view limitations.

The following is an example of how to create a custom view page with a limitation applied to only display access points:

1. In the Add New View wizard, enter the name Access Points and choose Summary as the view type. Click on SUBMIT to continue.
2. Click on the Edit button next to the column widths. Select the two-column layout and set the widths of columns 1 and 2 to 500 and 400, respectively.
3. Add resources to Column 1 by clicking on the plus button. The Add Resources page appears.
4. Expand Node Lists – All Nodes and Grouped Node Lists and place a check mark next to All Nodes. Click on SUBMIT to continue.
5. Add resources to Column 2 by clicking on the plus button.
6. Expand Summary Reports – Various Reports Showing Problem Areas and place a check mark next to Current Traffic on All Interfaces. Click on SUBMIT to continue.
7. Scroll down to View Limitation and click on the Edit button.
8. Select Group of Nodes, scroll to the bottom of the page, and click on CONTINUE.
9. Place a check mark next to each node. In this example, the hostnames JD-1130AP-1 and JD-1131AP-2 are the access points I am applying this new view against.
10. Click on SUBMIT to apply the limitation.
11. Click on DONE to finish.
The following is how the new custom view will look based on the example provided. This is only one way to create a custom view for access points, and there are plenty of other resources that I could have added to the page. Also, I could have chosen the Node Details view type instead of the Summary type. This is only one simple example of how to create a new view in Orion NPM. I encourage you to experiment with creating your own custom views to become more familiar with the process.

Views by Device Type

Views by Device Type is where you can customize which page displays when looking at a specific type of node. For example, I can force Orion NPM to display a different Node Details View page for a specific model of hardware. This is helpful when Orion NPM does not have a details view page for a hardware type, such as a UPS unit that can be monitored via SNMP.

Only device types that are currently being monitored by Orion NPM will be displayed when editing views by device type. You cannot apply custom views against a specific hardware type unless Orion NPM is currently monitoring that type of device.

Most options have the (default) option selected. The default view for almost every node monitored by Orion NPM is the Node Details View. Some exceptions to this rule are VMware nodes, where ESX Host Details is automatically chosen.

Menu bars

When working with the Orion dashboard, you have already seen the tabs and menu bars at the top of the page. All of the menu bar types can be customized as you see fit. You can even create your own custom menu bars. To customize menu bars, click on the Customize Menu Bars link in Orion Web Administration. Menu bars are assigned to one of the three tabs at the top of the dashboard. As shown in the following screenshot, there are five default menu bars in an out-of-the-box installation: Admin, Default, Guest, Network_TabMenu, and Virtualization_TabMenu.
Admin is the only menu bar that cannot be deleted from Orion NPM, but it can be edited. To edit a menu bar, click on the Edit button under its title and the Edit Menu Bar wizard will be displayed. Simply drag-and-drop the available items you wish to add to the menu bar from the right-hand column to the left-hand column. When finished, click on the SUBMIT button. When creating a brand new menu bar, the same editing process applies.

In addition, menu bars are assigned to a user account's view settings through the Manage Users wizard. This means that when you create a new menu bar, you will need to assign it to a user account from the Manage User Accounts wizard.

You cannot create your own tabs (that is, Home, Network, Virtualization) in Orion NPM. You can only edit and create menu bars and assign them to a tab.

Editing Resources

While Orion NPM's default views will suit almost every need out of the box, it is still a good idea to dive into the view settings of a module and see what you are able to customize. Every module in the dashboard allows an administrator to edit the module in some way by clicking on the EDIT button in the top-right corner of the resource. As an example, open the All Nodes view on the home summary screen, then click on the EDIT button to be presented with a list of options.

Every single module in Orion NPM can be edited, to a certain extent. You can always edit the title of the module, as well as the subtitle, in case the default descriptions are difficult to understand. For All Nodes, you can edit the grouping list for up to three levels. An example of creating a view for geographic locations is to set the first level to City, then leave the second and third levels set to None. This will display only the city name at the top level, with the node names underneath.
In the following example, it is easy to see how simple this type of view can be. For medium to large networks, a more appropriate view option is to set the first level to Location and the second level to Department. Feel free to set the grouping display to one that will suit your needs.

The remaining settings on the Edit Resource page are:

- Put nodes with null values for the grouping property
- Remember Expanded Groups
- Filter Nodes (SQL)

When Orion NPM does not know a specific property for a node, such as its location or department, the Put nodes with null values for the grouping property setting tells Orion NPM how to group these nodes. There are two options available: we can place nodes In the [Unknown] group or At the bottom of the list, in no group. Placing nodes in the [Unknown] group will have Orion NPM display these nodes with unknown (or blank) properties under the group title [Unknown], which will be displayed at the top. The following is an example:

The second option, At the bottom of the list, in no group, will do just that: any node that has unknown or blank values will be placed at the bottom of the node list in the generic Unknown group.

By default, the Orion dashboard website triggers a browser page refresh every few minutes. When the page refreshes, if you had expanded a view in a module (that is, a drill-down view opened by clicking on the plus button), the drill-down view will reset. The checkbox for Remember Expanded Groups is enabled by default, and it is a good idea to leave it checked.

The final option is Filter Nodes (SQL). This is an advanced feature of Orion NPM where you can use an explicit SQL string as a filter for these views. For example, use the filter Status<>1 to filter out all nodes that are operationally up and only view nodes that are down in the All Nodes module. SQL filters are helpful when creating custom views for administrative personnel. For more SQL filter examples, expand Show Filter Examples.
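To give a feel for the syntax, the following is a hedged sketch of filter strings of the shape accepted by the Filter Nodes (SQL) field. Apart from Status<>1, which comes from the text above, the property names are the grouping properties mentioned earlier (City, Department) and are assumptions; verify them against the examples under Show Filter Examples in your own installation.

```sql
-- Hypothetical filter strings for the Filter Nodes (SQL) field.
-- Property names other than Status are assumptions; check "Show Filter
-- Examples" in your installation for the authoritative list.

Status <> 1                 -- show only nodes that are not operationally up
City = 'Orlando'            -- show only nodes in one city
Department = 'Accounting'   -- show only one department's nodes
```

Each filter is entered on its own; conditions can also be combined with AND/OR as in an ordinary SQL WHERE clause.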
Also, you can click on the Help button in the module for more examples and guidance on how to perform SQL queries.

Just as when creating new views, you may have noticed by now that there is no cancel or revert option when changing a view setting in a module or a page. If you make a setting change but do not want to save the new setting, simply click on the Back button in your web browser to go back to the previous page without saving. Make sure that you don't change a view setting that you didn't intend to.

Customization

Orion NPM allows administrators to change a few aspects of the dashboard interface from the Customize module in Orion Web Administration. There are three different customization options available: Customize Menu Bars, Color Scheme, and External Websites.

Color Scheme

Orion NPM includes several color schemes that can be changed on the fly. To change the dashboard color scheme, click on the Color Scheme link in Orion Web Administration, choose a color scheme, and click on the SUBMIT button. Personally, I always use the Orion Default (white) because I can never decide which color to use!

External Websites

The External Websites option in the Customize module is an interesting one. It enables an administrator to add an external website to the Orion NPM dashboard as if it were part of the dashboard itself. For example, if you have an internal Microsoft SharePoint team site on your domain, you could add it to the Admin menu bar and have the team site act as if it were part of the console. When adding an external website, its address must be in URL format, such as https://URL. The following is an example of how to add an external website:

1. Click on External Websites in Orion Web Administration and then click on the ADD button.
2. Enter the Menu Title, Page Title, URL of the website, and the Menu Bar you want to add the link to.
In the following example, I am adding a link to www.SolarWinds.com on the NETWORK tab at the top of the dashboard page.

3. Click on OK to finish.

The external web link will now appear in the NETWORK tab. When clicking on the link, the web page will appear as if it is embedded in the Orion dashboard.

This sums up the discussion on customizing web views, modules, and other aspects of the dashboard. Next, we will discuss how to use Orion NPM to monitor your devices.
Packt
03 Sep 2015
3 min read

Creating a Web Server

In this article by Marco Schwartz, author of the book Intel Galileo Networking Cookbook, we will cover the recipe, Reading pins via a web server. (For more resources related to this topic, see here.)

Reading pins via a web server

We are now going to see how to use a web server for useful things. For example, we will see here how to use a web server to read the pins of the Galileo board, and then how to display these readings on a web page.

Getting ready

For this chapter, you won't need to do much with your Galileo board, as we just want to see if we can read the state of a pin from a web server. I simply connected pin number 7 of the Galileo board to the VCC pin, as shown in this picture:

How to do it...

We are now going to see how to read the state from pin number 7, and display this state on a web page. This is the complete code:

// Required modules
var m = require("mraa");
var util = require('util');
var express = require('express');
var app = express();

// Set pin 7 as an input
var myDigitalPin = new m.Gpio(7);
myDigitalPin.dir(m.DIR_IN);

// Routes
app.get('/read', function (req, res) {
  var myDigitalValue = myDigitalPin.read();
  res.send("Digital pin 7 value is: " + myDigitalValue);
});

// Start server
var server = app.listen(3000, function () {
  console.log("Express app started!");
});

You can now simply copy this code and paste it inside a blank Node.js project. Also make sure that the package.json file includes the Express module. Then, as usual, upload, build, and run the application using Intel XDK. You should see the confirmation message inside the XDK console. Then, use a browser to access your board on port 3000, at the /read route. You should see the following message, which is the reading from pin number 7:

If you can see this, congratulations, you can now read the state of the pins on your board, and display this on your web server!

How it works...

In this recipe, we combined two things that we saw in previous recipes.
We again used the mraa module to read from pins, here from pin number 7 of the board. You can find out more about the mraa module at https://github.com/intel-iot-devkit/mraa.

Then, we combined this with a web server using the Express framework, and we defined a new route called /read that reads the state of the pin and sends it back so that it can be displayed inside a web browser, with this code:

app.get('/read', function (req, res) {
  var myDigitalValue = myDigitalPin.read();
  res.send("Digital pin 7 value is: " + myDigitalValue);
});

See also

You can now check the next recipe to see how to control a pin from the Node.js server running on the Galileo board.

Summary

In this recipe, we saw how to read the state from pin number 7, and display this state on a web page. If you liked this article, please buy the book Intel Galileo Networking Cookbook, Packt Publishing, to learn over 45 Galileo recipes.

Resources for Article:

Further resources on this subject:

- Arduino Development [article]
- Integrating with Muzzley [article]
- Getting Started with Intel Galileo [article]
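Since the mraa calls in the recipe only work on real hardware, one useful development trick is to factor the route handlers into plain functions that accept any object with the right pin methods; the logic can then be exercised on a desktop, without a Galileo board. The sketch below is my own illustration, not code from the book: the fake pin and response objects simply mimic an mraa Gpio's read()/write() methods and Express's res.send(), and the shape of the /write route (a :state route parameter) is an assumption about what the pin-control recipe might look like.

```javascript
// Hypothetical refactoring: route handlers as standalone functions, so the
// logic can be tested without Galileo hardware. "pin" is anything that
// exposes the mraa Gpio methods the handler needs.
function makeReadHandler(pin) {
  return function (req, res) {
    var myDigitalValue = pin.read();
    res.send("Digital pin 7 value is: " + myDigitalValue);
  };
}

// Sketch of a companion /write route (assumed shape: GET /write/1 or
// /write/0, registered with app.get('/write/:state', ...) on a pin that
// was opened with dir(m.DIR_OUT))
function makeWriteHandler(pin) {
  return function (req, res) {
    var state = req.params.state === "1" ? 1 : 0;
    pin.write(state);
    res.send("Digital pin set to: " + state);
  };
}

// Stand-ins for the mraa Gpio object and Express's response object
var readPin = { read: function () { return 1; } };
var writePin = { written: [], write: function (v) { this.written.push(v); } };

function collect() {
  var out = [];
  return { out: out, res: { send: function (msg) { out.push(msg); } } };
}

var r1 = collect();
makeReadHandler(readPin)(null, r1.res);
console.log(r1.out[0]); // "Digital pin 7 value is: 1"

var r2 = collect();
makeWriteHandler(writePin)({ params: { state: "1" } }, r2.res);
console.log(r2.out[0]); // "Digital pin set to: 1"
```

On the board itself, the same handlers would be wired to the real pins, for example app.get('/read', makeReadHandler(myDigitalPin)).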
Packt
13 Sep 2016
23 min read

Getting Started with a Cloud-Only Scenario

In this article by Jochen Nickel, from the book Mastering Identity and Access Management with Microsoft Azure, we will first start with a business view to identify the important business needs and challenges of a cloud-only environment and scenario. Throughout this article, we will also discuss the main features of and licensing information for such an approach. Finally, we will round up with the challenges surrounding security and legal requirements. The topics we will cover in this article are as follows:

- Identifying the business needs and challenges
- An overview of feature and licensing decisions
- Defining the benefits and costs
- Principles of security and legal requirements

(For more resources related to this topic, see here.)

Identifying business needs and challenges

Oh! Don't worry, we don't have the intention of boring you with a lesson on typical IAM stories – we're sure you've been in touch with a lot of information in this area. However, you do need to have an independent view of the actual business needs and challenges in the cloud area, so that you can get the most out of your own situation.

Common Identity and Access Management needs

Identity and Access Management (IAM) is the discipline that plays an important role in the current cloud era of your organization. It's also of value to small and medium-sized companies, so that they can enable the right individuals to access the right resources from any location and device, at the right time and for the right reasons, to empower and enable the desired business outcomes. IAM addresses the mission-critical need of ensuring appropriate and secure access to resources inside and across company borders, such as cloud or partner applications. The old security strategy of only securing your environment with an intelligent firewall concept and access control lists will take on a more and more subordinate role.
It is strongly recommended that you review and rework this strategy in order to meet higher compliance, operational, and business requirements. To adopt a mature security and risk management practice, it's very important that your IAM strategy is business-aligned and that the required business skills and stakeholders are committed to this topic. Without clearly defined business processes, you can't implement a successful IAM functionality in the planned timeframe. Companies that follow this strategy can become more agile in supporting new business initiatives and reduce their IAM costs.

The following three groups show the typical indicators of missing IAM capabilities, both on-premises and for cloud services:

Your employees/partners:

- The same password is used across multiple applications without periodic changes (also in social media accounts)
- Multiple identities and logins
- Passwords are written down in sticky notes, Excel, and so on
- Application and data access is still allowed after termination
- Forgotten usernames and passwords
- Poor usability of application access inside and outside the company (multiple logins, VPN connection required, incompatible devices, and so on)
Your IT department:

- High workload on password reset support
- Missing automated identity lifecycles with integrity (data duplication and data quality problems)
- No insight into application usage and security
- Missing reporting tools for compliance management
- Complex integration of central access to Software as a Service (SaaS), partner, and on-premises applications (missing central access/authentication/authorization platform)
- No policy enforcement in cloud services usage
- Accumulation of access rights (missing processes)

Your developers:

- Limited knowledge of all the different security standards, protocols, and APIs
- Constantly changing requirements and rapid developments
- Complex changes of the identity provider

Implications of Shadow IT

On top of that, the IT department will often hear the following question: When can we expect the new application for our business unit? Sorry, but the answer will always take too long. Why should I wait? All I need is a valid credit card that allows me to buy my required business application. Suddenly, another popular phenomenon pops up: shadow IT! Most of the time, this introduces another problem: uncontrolled information leakage. The following figure shows the flow of typical information – and shows that what you don't know can hurt!

The previous figure should not give you the impression that cloud services are inherently dangerous, but rather that before using them you should first be aware of whether, and in which manner, they are being used. Simply migrating or ordering a new service in the cloud won't solve common IAM needs. This figure should help you to imagine that, if not planned, the introduction of a new or migrated service brings with it a new identity and credential set for the users, and therefore multiple credentials and logins to remember! You should also be sure which information can be stored and processed in a regulatory area other than your own organization's.
The following table shows the responsibilities involved when using the different cloud service models. In particular, you should note that you are responsible for data classification, Identity and Access Management, and endpoint security in every model (C = Customer, P = Provider):

Responsibility                 | IaaS | PaaS | SaaS
Data Classification            | C    | C    | C
End Point Security             | C    | C    | C
Identity and Access Management | C    | C, P | C, P
Application Security           | C    | C, P | P
Network Controls               | C    | C, P | P
Host Security                  | P    | P    | P
Physical Security              | P    | P    | P

The mobile workforce and cloud-first strategy

Many organizations are facing the challenge of meeting the expectations of a mobile workforce, all with their own device preferences, a mix of private and professional commitments, and the request to use social media as an additional way of business communication. Let's dive into a short, practical, but extreme example.

The AzureID company employs approximately 80 employees. They work with a SaaS landscape of eight services to drive all their business processes. On premises, they use Network-Attached Storage (NAS) to store some corporate data and provide network printers to all of the employees. Some of the printers are directly attached to the C-level of the company. The main issues today are that the employees need to remember all the usernames and passwords for all the business applications, and if they want to share some information with partners, they cannot give them partial access to the necessary information in a secure and flexible way. Another point: if they want to access corporate data from their mobile devices, it's always a burden to provide every single login for the applications necessary to fulfil their jobs. The small IT department, with one Full-time Equivalent (FTE), is overloaded with having to create and manage every identity in each different service.
In addition, users forget their passwords periodically, and most of the time outside normal business hours. The following figure shows the actual infrastructure.

Let's analyze this extreme example to reveal some typical problems, so that you can match some ideas to your IT infrastructure:

- Provisioning, managing, and de-provisioning identities can be a time-consuming task
- There is no single identity and set of credentials for the employees
- There is no collaboration support for partner and consumer communication
- There is no Self-Service Password Reset functionality
- Sensitive information leaves the corporation over email
- There are no usage or security reports about the accessed applications/services
- There is no central way to enable multi-factor authentication for sensitive applications
- There is no secure strategy for accessing social media
- There is no usable, secure, and central remote access portal

Remember, shifting applications and services to the cloud just introduces more implications and challenges, not solutions. First of all, you need your IAM functionality accurately in place. You also always need to handle on-premises resources, even if only with minimal printer management.

An overview of feature and licensing decisions

With the cloud-first strategy of Microsoft, the Azure platform and its number of services grow constantly, and we have seen a lot of customers lost in a paradise of wonderful services and functionality. This brings us to the point of how to figure out the services relevant to IAM for you, and how to give them the space for explanation. Obviously, there are more services available that touch this field with a small subset of functionality, but due to the limited page count of this book, we will focus on the most important ones and will reference any other interesting content.

The primary service for IAM is the Azure Active Directory service, which has also been the core directory service for Office 365 since 2011.
Every other SaaS offering from Microsoft is also based on this core service, including Microsoft Intune, Dynamics, and Visual Studio Online. So, if you are already an Office 365 customer, you will have your own instance of Azure Active Directory in place. For sustained access management and the protection of your information assets, the Azure Rights Management services are in place. There is also an option for Office 365 customers to use the included Azure Rights Management services. You can find further information about this by visiting the following link: http://bit.ly/1KrXUxz.

Let's get started with the feature sets that can provide a solution, as shown in the following screenshot:

Including Azure Active Directory and Rights Management helps you to provide a secure solution with a central access portal for all of your applications, with just one identity and login for your employees, partners, and the customers that you want to share your information with. With a few clicks, you can also add multi-factor authentication to your sensitive applications and administrative accounts. Furthermore, you can directly add Self-Service Password Reset functionality that your users can use to reset their passwords themselves. As the administrator, you will receive predefined security and usage reports to control your complete application ecosystem. To protect your sensitive content, you get digital rights management capabilities with the Azure Rights Management services, giving you granular access rights on every device your information is used on. Doesn't it sound great? Let's take a deeper look into the functionality and usage of the different Microsoft Azure IAM services.

Azure Active Directory

Azure Active Directory is a fully managed multi-tenant service that provides IAM capabilities as a service. This service is not just an instance of the Windows Server Domain Controller you already know from your actual Active Directory infrastructure.
Azure AD is not a replacement for the Windows Server Active Directory either. If you already use a local Active Directory infrastructure, you can extend it to the cloud by integrating Azure AD, authenticating your users in the same way for on-premises and cloud services.

Staying with the business view, we want to discuss some of the main features of Azure Active Directory. Firstly, we want to start with the Access panel, which gives the user a central place to access all of his applications from any device and any location, with single sign-on. The combination of the Azure Active Directory Access panel and the Windows Server 2012 R2/2016 Web Application Proxy / ADFS capabilities provides an efficient way to securely publish web applications and services to your employees, partners, and customers. It will be a good replacement for your retired Forefront TMG/UAG infrastructure. Over this portal, your users can do the following:

- User and group management
- Access their business-relevant applications (on-premises, partner, and SaaS) with single sign-on or single logon
- Delegation of access control to the data, process, or project owner
- Self-service profile editing for correcting or adding information
- Self-service password change and reset
- Management of registered devices

With the Self-Service Password Reset functionality, a user gets a straightforward way to reset his password and to prove his identity, for example through a phone call, an email, or by answering security questions. The different portals can be customized with your own corporate identity branding. To try the different portals, just use the following links: https://myapps.microsoft.com and https://passwordreset.microsoftonline.com.

To complete our short introduction to the main features of Azure Active Directory, we will take a look at the reporting capabilities. With this feature, you get predefined reports with the following information provided.
- Anomaly reports
- Integrated application reports
- Error reports
- User-specific reports
- Activity logs

By viewing and acting on these reports, you are able to control your whole application ecosystem published over the Azure AD Access panel. From our discussions with customers, we recognize that, a lot of the time, the differences between the different Azure Active Directory editions are unclear. For that reason, we will include and explain the feature tables provided by Microsoft. We will start with the common features and then go through the premium features of Azure Active Directory.

Common features

First of all, we want to discuss the Access panel portal so we can clear up some open questions. With the Azure AD Free and Basic editions, you can provide a maximum of 10 applications to every user. However, this doesn't mean that you are limited to 10 applications in total. Next, the portal link: right now, it cannot be changed to your own company-owned domain, such as https://myapps.inovit.ch. The only way you can do so is by providing an alias in your DNS configuration; the accessed link is https://myapps.microsoft.com. Company branding leads us on to the next discussion point, where we are often asked how much corporate identity branding is possible. The following link provides you with all the necessary information for branding your solution: http://bit.ly/1Jjf2nw. Rounding off this short Q&A on the different feature sets is Application Proxy usage, one of the important differentiators between the Azure AD Free and Basic editions. The short answer is that with Azure AD Free, you cannot publish on-premises applications and services over the Azure AD Access panel portal.
Features | AAD Free | AAD Basic | AAD Premium
--- | --- | --- | ---
Directory as a Service (objects) | 500k | Unlimited | Unlimited
User/group management (UI or PowerShell) | X | X | X
Access panel portal for SSO (per user) | 10 apps | 10 apps | Unlimited
User-based application access management/provisioning | X | X | X
Self-service password change (cloud users) | X | X | X
Directory synchronization tool | X | X | X
Standard security reports | X | X | X
High availability SLA (99.9%) | | X | X
Group-based application access management and provisioning | | X | X
Company branding | | X | X
Self-service password reset for cloud users | | X | X
Application Proxy | | X | X
Self-service group management for cloud users | | X | X

Premium features

The Azure Active Directory Premium edition provides you with the entire set of IAM capabilities, including the usage licenses of the on-premises Microsoft Identity Manager. From a technical perspective, you need to use the Azure AD Connect utility to connect your on-premises Active Directory with the cloud, and the Microsoft Identity Manager to manage your on-premises identities and prepare them for your cloud integration. To acquire Azure AD Premium, you can also use the Enterprise Mobility Suite (EMS) bundle, which contains Azure AD Premium, Azure Rights Management, Microsoft Intune, and Advanced Threat Analytics (ATA) licensing. You can find more information about EMS by visiting http://bit.ly/1cJLPcM and http://bit.ly/29rupF4.

Features | Azure AD Premium
--- | ---
Self-service password reset with on-premises write-back | X
Microsoft Identity Manager server licenses | X
Advanced anomaly security reports | X
Advanced usage reporting | X
Multi-Factor Authentication (cloud users) | X
Multi-Factor Authentication (on-premises users) | X

Azure AD Premium reference: http://bit.ly/1gyDRoN

Multi-Factor Authentication for cloud users is also included in Office 365. The main difference is that you cannot use it for on-premises users and services such as VPN or web servers.
Azure Active Directory Business to Business

One of the newest features based on Azure Active Directory is the new Business to Business (B2B) capability. It solves the problem of collaboration between business partners. It allows users to share business applications between partners without going through inter-company federation relationships and internally managed partner identities. With Azure AD B2B, you can create cross-company relationships by inviting and authorizing users from partner companies to access your resources. With this process, each company federates once with Azure AD, and each user is then represented by a single Azure AD account. This option also provides a higher security level, because if a user leaves the partner organization, access is automatically disallowed. Inside Azure AD, the user is handled as a guest and cannot traverse other users in the directory. Actual permissions are granted through the appropriate associated group memberships.

Azure Active Directory Business to Consumer

Azure Active Directory Business to Consumer (B2C) is another brand new feature based on Azure Active Directory. This functionality supports signing in to your application using social networks such as Facebook, Google, or LinkedIn, and creating accounts with usernames and passwords specifically for your company-owned application. Self-service password management and profile management are also provided in this scenario. Additionally, Multi-Factor Authentication introduces a higher grade of security to the solution. Principally, this feature allows small and medium companies to keep their customers in a separate Azure Active Directory with all the capabilities, and more, of the corporate-managed Azure Active Directory. With different verification options, you are also able to provide the identity assurance required for more sensitive transactions.
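The verification options mentioned above commonly include a time-based one-time password (TOTP), the mechanism behind most authenticator apps. The following sketch shows the generic RFC 6238 algorithm only; it is purely illustrative and is not Azure MFA's internal implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """Generic RFC 6238 time-based one-time password:
    HMAC-SHA1 over the count of 30-second intervals since the
    Unix epoch, dynamically truncated (RFC 4226) to N digits."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step
    digest = hmac.new(secret, struct.pack('>Q', counter),
                      hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector: at T=59s, the 8-digit SHA-1
# TOTP for the ASCII key '12345678901234567890' is 94287082
print(totp(b'12345678901234567890', for_time=59, digits=8))  # 94287082
```

Because both sides derive the code from a shared secret and the current time, a captured code is useless a few seconds later, which is what mitigates the replay-style attacks discussed in the next section.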
Azure Active Directory Privileged Identity Management

Azure AD Privileged Identity Management provides you with the functionality to manage, control, and monitor your privileged identities. With this option, you can build up an RBAC solution over your Azure AD and other Microsoft online services, such as Office 365 or Microsoft Intune. The following activities can be carried out with this functionality:

- Discover the actual configured Azure AD administrators
- Provide just-in-time administrative access
- Get reports about administrator access history and assignment changes
- Receive alerts about access to a privileged role

The following built-in roles can be managed with the current version:

- Global Administrator
- Billing Administrator
- Service Administrator
- User Administrator
- Password Administrator

Azure Multi-Factor Authentication

Protecting sensitive information or application access with additional authentication is an important task, and not just in the on-premises world. In particular, it needs to be extended to every sensitive cloud service in use. There are many ways of providing this level of security and additional authentication, such as certificates, smart cards, or biometric options. For example, smart cards depend on special reader hardware and cannot be used in every scenario without limiting access to specific devices or hardware. The following table gives you an overview of different attacks and how they can be mitigated with a well-designed and implemented security solution.
Attack | Security solution
--- | ---
Password brute force | Strong password policies
Shoulder surfing; key or screen logging | One-time password solution
Phishing or pharming; man-in-the-middle; whaling (social engineering) | Server authentication (HTTPS); two-factor authentication; certificate or one-time password solution
Certificate authority corruption; cross-channel attacks (CSRF); non-repudiation | Transaction signature and verification
Man-in-the-browser; key loggers | Secure PIN entry; secure messaging; browser (read only); push button (token); three-factor authentication

The Azure Multi-Factor Authentication functionality has been included in the Azure Active Directory Premium capabilities to address exactly the attacks described in the previous table. With a one-time password solution, you can build a very capable security solution for accessing information or applications from devices that cannot use smart cards as the additional authentication method. For small or medium business organizations, a smart card deployment, including the appropriate management solution, will often be too cost-intensive, and the Azure MFA solution can be a good alternative for reaching the expected higher security level. In discussions with our customers, we recognized that a lot of them don't realize that Azure MFA is already included in different Office 365 plans. They would be able to protect their Office 365 with multi-factor authentication out of the box, but they don't know it! This brings us to the following table, which compares the functionality between Office 365 and Azure MFA.
Feature | O365 | Azure
--- | --- | ---
Administrators can enable/enforce MFA for end users | X | X
Use mobile app (online and OTP) as second authentication factor | X | X
Use phone call as second authentication factor | X | X
Use SMS as second authentication factor | X | X
App passwords for non-browser clients (for example, Outlook, Lync) | X | X
Default Microsoft greetings during authentication phone calls | X | X
Remember Me | X | X
IP whitelist | | X
Custom greetings during authentication phone calls | | X
Fraud alert | | X
Event confirmation | | X
Security reports | | X
Block/unblock users | | X
One-time bypass | | X
Customizable caller ID for authentication phone calls | | X
MFA Server – MFA for on-premises applications | | X
MFA SDK – MFA for custom apps | | X

With the Office 365 capabilities of MFA, administrators are able to use basic functionality to protect their sensitive information. In particular, if integrating on-premises users and services, the Azure MFA solution is needed. Azure MFA and the on-premises installation of the MFA server cannot be used to protect your Windows Server DirectAccess implementation. Furthermore, you will find the customizable caller ID limited to specific regions.

Azure Rights Management

More and more organizations are in the position of needing to provide a continuous and integrated information protection solution to protect sensitive assets and information. On one side stands the department, which carries out its business activities, generates the data, and then processes it. Furthermore, it uses the data inside and outside the functional areas, passes it on, and runs a lively exchange of information. On the other hand, revision is required by legal requirements that prescribe measures to ensure that information is handled properly and that dangers such as industrial espionage and data loss are avoided. So, this is a big concern when safeguarding sensitive information.
While the staff appreciate the many ways of communication and data exchange, this development starts stressing the IT security officers and makes the managers worried. The fear is that critical corporate data moves around in an uncontrolled manner and leaves the company or ends up with competitors. The routes are varied, but data is often lost through inadvertent delivery via email. In addition, sensitive data can leave the company on a USB stick or smartphone, or IT media can be lost or stolen. On top of this, new risks are added, such as employees posting information on social media platforms. IT must ensure the protection of data in all phases, and traditional IT security solutions are not always sufficient. The following figure illustrates this situation and leads us to the Azure Rights Management services. Like the other additional features, the base functionality is included in different Office 365 plans. The main difference between the two editions is that only the Azure RMS edition can be integrated into an on-premises file server environment in order to use the File Classification Infrastructure feature of the Windows Server file server role. The Azure RMS capability allows you to protect your sensitive information based on classification information with a granular access control system. The following table, provided by Microsoft, shows the differences between the Office 365 and Azure RMS functionality. Azure RMS is included with the E3, E4, A3, and A4 plans.
Feature | RMS O365 | RMS Azure
--- | --- | ---
Users can create and consume protected content by using Windows clients and Office applications | X | X
Users can create and consume protected content by using mobile devices | X | X
Integrates with Exchange Online, SharePoint Online, and OneDrive for Business | X | X
Integrates with Exchange Server 2013/2010 and SharePoint Server 2013/2010 on-premises via the RMS connector | X | X
Administrators can create departmental templates | X | X
Organizations can create and manage their own RMS tenant key in a hardware security module (the Bring Your Own Key solution) | X | X
Supports non-Office file formats: text and image files are natively protected; other files are generically protected | X | X
RMS SDK for all platforms: Windows, Windows Phone, iOS, Mac OS X, and Android | X | X
Integrates with Windows file servers for automatic protection with FCI via the RMS connector | | X
Users can track usage of their documents | | X
Users can revoke access to their documents | | X

In particular, the tracking feature helps users to find out where their documents are distributed, and allows them to revoke access to a single protected document.

Microsoft Azure security services in combination

Now that we have discussed the relevant Microsoft Azure IAM capabilities, you can see that Microsoft provides more than just single features or subsets of functionality. It brings a whole solution to the market, which provides functionality for every facet of IAM. Microsoft Azure also combines clear service management with IAM, making it a rich solution for your organization. You can work with this toolset in a native cloud-first scenario, a hybrid scenario, or a complex hybrid scenario, and you can extend your solution to every possible use case or environment.
The following figure illustrates all the different topics that are covered by Microsoft Azure security solutions:

Defining the benefits and costs

The Microsoft Azure IAM capabilities help you to empower your users with a flexible and rich solution that enables better business outcomes in a more productive way. You help your organization to improve regulatory compliance overall and reduce information security risk. Additionally, it can be possible to reduce IT operating and development costs by providing higher operating efficiency and transparency. Last but not least, it will lead to improved user satisfaction and better support from the business for further investments. The following toolset gives you very good instruments for calculating the costs of your specific environment:

- Azure Active Directory Pricing Calculator: http://bit.ly/1fspdhz
- Enterprise Mobility Suite Pricing: http://bit.ly/1V42RSk
- Microsoft Azure Pricing Calculator: http://bit.ly/1JojUfA

Principles of security and legal requirements

The classification of data, such as business information or personal data, is not only necessary for an on-premises infrastructure. It is a basis for the assurance of business-related information and is represented by compliance with official regulations. These requirements are of greater significance when using cloud services or solutions outside your own company and regulatory borders. They are clearly needed for a controlled shift of data into an area in which responsibilities and contracts must be regulated. Security boundaries do not stop at the private cloud; you remain responsible for the technical and organizational implementation and control of security settings.
The subsequent objectives are as follows:

- Construction, extension, or adaptation of the data classification for cloud integration
- Data classification as a basis for encryption or isolated security silos
- Data classification as a basis for authentication and authorization

Microsoft itself has strict controls that restrict access to Azure to Microsoft employees. Microsoft also enables customers to control access to their Azure environments, data, and applications, as well as allowing them to have the services penetration-tested and audited by special auditors and regulators on request. A statement from Microsoft:

"Customers will only use cloud providers in which they have great trust. They must trust that the privacy of their information will be protected, and that their data will be used in a way that is consistent with their expectations. We build privacy protections into Azure through Privacy by Design."

You can get all the necessary information about security, compliance, and privacy by visiting the following link: http://bit.ly/1uJTLAT.

Summary

Now that you are fully clued up with information about typical needs and challenges, and feature and licensing information, you should be able to apply the right technology and licensing model to your cloud-only scenario. You should also be aware of the benefits and cost calculators that will help you calculate a basic price model for your required services. Furthermore, you can also decide which security and legal requirements are relevant for your cloud-only environments.

Resources for Article:

Further resources on this subject:

- Creating Multitenant Applications in Azure [article]
- Building A Recommendation System with Azure [article]
- Installing and Configuring Windows Azure Pack [article]
Packt
19 Aug 2014
2 min read

Internet of Things with Xively

In this article by Marco Schwartz, author of the book Arduino Networking, we will visualize the data we recorded with Xively. (For more resources related to this topic, see here.)

Visualizing the recorded data

We are now going to visualize the data we recorded with Xively. You can go back to the device page on the Xively website. You should see that some data has been recorded in different channels, as shown in the following screenshot: By clicking on one of these channels, you can also display the data graphically. For example, the following screenshot shows the temperature channel after a few measurements: After a while, you will have more points for the temperature measurements, as shown in the following screenshot: You can also do the same for the humidity measurements; the following screenshot shows the humidity measurements: Note that by clicking on the time icon, you can change the time axis and display a longer or shorter time range. If you don't see any data being displayed, you need to go back to the Arduino IDE and make sure that the answer coming from the Xively server is a 200 OK message, as we saw in the previous section.

Summary

In this article, we went to the Xively website to visualize the data, learned how to visualize the data graphically, and saw this data arrive in real time.

Resources for Article:

Further resources on this subject:

- Avoiding Obstacles Using Sensors [Article]
- Hardware configuration [Article]
- Writing Tag Content [Article]

Packt
16 Jun 2015
27 min read

Client and Server Applications

In this article by Sam Washington and Dr. M. O. Faruque Sarker, authors of the book Learning Python Network Programming, we're going to use sockets to build network applications. Sockets follow one of the main models of computer networking, that is, the client/server model. We'll look at this with a focus on structuring server applications. We'll cover the following topics:

- Designing a simple protocol
- Building an echo server and client

(For more resources related to this topic, see here.)

The examples in this article are best run on Linux or a Unix operating system. The Windows sockets implementation has some idiosyncrasies, and these can create some error conditions which we will not be covering here. Note that Windows does not support the poll interface that we'll use in one example. If you do use Windows, then you'll probably need to use Ctrl + Break to kill these processes in the console, rather than Ctrl + C, because Python in a Windows command prompt doesn't respond to Ctrl + C when it's blocking on a socket send or receive, which will be quite often in this article! (And if, like me, you're unfortunate enough to try testing these on a Windows laptop without a Break key, then be prepared to get very familiar with the Windows Task Manager's End task button.)

Client and server

The basic setup in the client/server model is one device, the server, that runs a service and patiently waits for clients to connect and make requests to the service. A 24-hour grocery shop may be a real-world analogy. The shop waits for customers to come in, and when they do, they request certain products, purchase them, and leave. The shop might advertise itself so people know where to find it, but the actual transactions happen while the customers are visiting the shop. A typical computing example is a web server. The server listens on a TCP port for clients that need its web pages.
When a client, for example a web browser, requires a web page that the server hosts, it connects to the server and then makes a request for that page. The server replies with the content of the page and then the client disconnects. The server advertises itself by having a hostname, which the clients can use to discover the IP address so that they can connect to it. In both of these situations, it is the client that initiates any interaction; the server is purely responsive to that interaction. So, the needs of the programs that run on the client and server are quite different. Client programs are typically oriented towards the interface between the user and the service. They retrieve and display the service, and allow the user to interact with it. Server programs are written to stay running for indefinite periods of time, to be stable, to efficiently deliver the service to the clients that are requesting it, and to potentially handle a large number of simultaneous connections with a minimal impact on the experience of any one client. In this article, we will look at this model by writing a simple echo server and client, which can handle a session with multiple clients. The socket module in Python perfectly suits this task.

An echo protocol

Before we write our first client and server programs, we need to decide how they are going to interact with each other, that is, we need to design a protocol for their communication. Our echo server should listen until a client connects and sends a bytes string, then we want it to echo that string back to the client. We only need a few basic rules for doing this. These rules are as follows:

1. Communication will take place over TCP.
2. The client will initiate an echo session by creating a socket connection to the server.
3. The server will accept the connection and listen for the client to send a bytes string.
4. The client will send a bytes string to the server.
5. Once it sends the bytes string, the client will listen for a reply from the server.
6. When it receives the bytes string from the client, the server will send the bytes string back to the client.
7. When the client has received the bytes string from the server, it will close its socket to end the session.

These steps are straightforward enough. The missing element here is how the server and the client will know when a complete message has been sent. Remember that an application sees a TCP connection as an endless stream of bytes, so we need to decide what in that byte stream will signal the end of a message.

Framing

This problem is called framing, and there are several approaches that we can take to handle it. The main ones are described here:

1. Make it a protocol rule that only one message will be sent per connection, and once a message has been sent, the sender will immediately close the socket.
2. Use fixed-length messages. The receiver will read the number of bytes and know that they have the whole message.
3. Prefix the message with the length of the message. The receiver will read the length of the message from the stream first, then it will read the indicated number of bytes to get the rest of the message.
4. Use special character delimiters for indicating the end of a message. The receiver will scan the incoming stream for a delimiter, and the message comprises everything up to the delimiter.

Option 1 is a good choice for very simple protocols. It's easy to implement and it doesn't require any special handling of the received stream. However, it requires the setting up and tearing down of a socket for every message, and this can impact performance when a server is handling many messages at once. Option 2 is again simple to implement, but it only makes efficient use of the network when our data comes in neat, fixed-length blocks.
For example, in a chat server, message lengths are variable, so we would have to use a special character, such as the null byte, to pad messages to the block size. This only works where we know for sure that the padding character will never appear in the actual message data. There is also the additional issue of how to handle messages longer than the block length. Option 3 is usually considered one of the best approaches. Although it can be more complex to code than the other options, the implementations are still reasonably straightforward, and it makes efficient use of bandwidth. The overhead imposed by including the length of each message is usually minimal compared to the message length. It also avoids the need for any additional processing of the received data, which may be needed by certain implementations of option 4. Option 4 is the most bandwidth-efficient option, and is a good choice when we know that only a limited set of characters, such as the ASCII alphanumeric characters, will be used in messages. If this is the case, then we can choose a delimiter character, such as the null byte, which will never appear in the message data, and then the received data can be easily broken into messages as this character is encountered. Implementations are usually simpler than those of option 3. Although it is possible to employ this method for arbitrary data, that is, where the delimiter could also appear as a valid character in a message, this requires the use of character escaping, which needs an additional round of processing of the data. Hence, in these situations, it's usually simpler to use length-prefixing. For our echo and chat applications, we'll be using the UTF-8 character set to send messages. The null byte isn't used in any character in UTF-8 except for the null byte itself, so it makes a good delimiter. Thus, we'll be using method 4 with the null byte as the delimiter to frame our messages.
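Although our applications use the null-byte delimiter, option 3 (length-prefixing) is worth seeing in code. The following is a minimal sketch; the helper names frame() and unframe() are invented for illustration and are not part of the tincanchat module we build below:

```python
import struct

def frame(msg):
    """Prefix a UTF-8 message with its length as a 4-byte
    big-endian unsigned integer (option 3 framing)."""
    data = msg.encode('utf-8')
    return struct.pack('>I', len(data)) + data

def unframe(buf):
    """Try to parse one length-prefixed message from the front of
    buf. Returns (message, remaining_bytes), or (None, buf) if buf
    does not yet contain a complete message."""
    if len(buf) < 4:
        return None, buf
    (length,) = struct.unpack('>I', buf[:4])
    if len(buf) < 4 + length:
        return None, buf
    msg = buf[4:4 + length].decode('utf-8')
    return msg, buf[4 + length:]

# Two messages survive being concatenated in one TCP stream
buf = frame('hello') + frame('world')
first, buf = unframe(buf)
second, buf = unframe(buf)
print(first, second)  # hello world
```

Note how the receiver never needs to scan the payload for special characters, which is why length-prefixing handles arbitrary binary data without any escaping.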
So, our last rule, which is number 8, will become:

8. Messages will be encoded in the UTF-8 character set for transmission, and they will be terminated by the null byte.

Now, let's write our echo programs.

A simple echo server

As we work through this article, we'll find ourselves reusing several pieces of code, so to save ourselves from repetition, we'll set up a module with useful functions that we can reuse as we go along. Create a file called tincanchat.py and save the following code in it:

import socket

HOST = ''
PORT = 4040

def create_listen_socket(host, port):
    """ Setup the sockets our server will receive connection requests on """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen(100)
    return sock

def recv_msg(sock):
    """ Wait for data to arrive on the socket, then parse into messages using b'\0' as message delimiter """
    data = bytearray()
    msg = ''
    # Repeatedly read 4096 bytes off the socket, storing the bytes
    # in data until we see a delimiter
    while not msg:
        recvd = sock.recv(4096)
        if not recvd:
            # Socket has been closed prematurely
            raise ConnectionError()
        data = data + recvd
        if b'\0' in recvd:
            # we know from our protocol rules that we only send
            # one message per connection, so b'\0' will always be
            # the last character
            msg = data.rstrip(b'\0')
    msg = msg.decode('utf-8')
    return msg

def prep_msg(msg):
    """ Prepare a string to be sent as a message """
    msg += '\0'
    return msg.encode('utf-8')

def send_msg(sock, msg):
    """ Send a string over a socket, preparing it first """
    data = prep_msg(msg)
    sock.sendall(data)

First, we define a default interface and a port number to listen on. The empty '' interface, specified in the HOST variable, tells socket.bind() to listen on all available interfaces.
If you want to restrict access to just your machine, then change the value of the HOST variable at the beginning of the code to 127.0.0.1. We'll be using create_listen_socket() to set up our server listening connections. This code is the same for several of our server programs, so it makes sense to reuse it. The recv_msg() function will be used by our echo server and client for receiving messages from a socket. In our echo protocol, there isn't anything that our programs may need to do while they're waiting to receive a message, so this function just calls socket.recv() in a loop until it has received the whole message. As per our framing rule, it will check the accumulated data on each iteration to see if it has received a null byte, and if so, then it will return the received data, stripping off the null byte and decoding it from UTF-8. The send_msg() and prep_msg() functions work together for framing and sending a message. We've separated the null byte termination and the UTF-8 encoding into prep_msg() because we will use them in isolation later on.

Handling received data

Note that we're drawing ourselves a careful line with these send and receive functions as regards string encoding. Python 3 strings are Unicode, while the data that we receive over the network is bytes. The last thing that we want to be doing is handling a mixture of these in the rest of our program code, so we're going to carefully encode and decode the data at the boundary of our program, where the data enters and leaves the network. This will ensure that any functions in the rest of our code can assume that they'll be working with Python strings, which will later on make things much easier for us. Of course, not all the data that we may want to send or receive over a network will be text. For example, images, compressed files, and music can't be decoded to a Unicode string, so a different kind of handling is needed.
Usually, this will involve loading the data into a class, such as a Python Imaging Library (PIL) image, for example, if we are going to manipulate the object in some way. There are basic checks that could be done on the received data, before performing full processing on it, to quickly flag any problems with the data. Some examples of such checks are as follows:

- Checking the length of the received data
- Checking the first few bytes of a file for a magic number to confirm a file type
- Checking values of higher-level protocol headers, such as the Host header in an HTTP request

This kind of checking will allow our application to fail fast if there is an obvious problem.

The server itself

Now, let's write our echo server. Open a new file called 1.1-echo-server-uni.py and save the following code in it:

import tincanchat

HOST = tincanchat.HOST
PORT = tincanchat.PORT

def handle_client(sock, addr):
    """ Receive data from the client via sock and echo it back """
    try:
        msg = tincanchat.recv_msg(sock)  # Blocks until received
                                         # complete message
        print('{}: {}'.format(addr, msg))
        tincanchat.send_msg(sock, msg)  # Blocks until sent
    except (ConnectionError, BrokenPipeError):
        print('Socket error')
    finally:
        print('Closed connection to {}'.format(addr))
        sock.close()

if __name__ == '__main__':
    listen_sock = tincanchat.create_listen_socket(HOST, PORT)
    addr = listen_sock.getsockname()
    print('Listening on {}'.format(addr))

    while True:
        client_sock, addr = listen_sock.accept()
        print('Connection from {}'.format(addr))
        handle_client(client_sock, addr)

This is about as simple as a server can get! First, we set up our listening socket with the create_listen_socket() call. Second, we enter our main loop, where we listen forever for incoming connections from clients, blocking on listen_sock.accept().
When a client connection comes in, we invoke the handle_client() function, which handles the client as per our protocol. We've created a separate function for this code, partly to keep the main loop tidy, and partly because we'll want to reuse this set of operations in later programs. That's our server; now we just need a client to talk to it.

A simple echo client

Create a file called 1.2-echo_client-uni.py and save the following code in it:

import sys, socket
import tincanchat

HOST = sys.argv[-1] if len(sys.argv) > 1 else '127.0.0.1'
PORT = tincanchat.PORT

if __name__ == '__main__':
    while True:
        try:
            sock = socket.socket(socket.AF_INET,
                                 socket.SOCK_STREAM)
            sock.connect((HOST, PORT))
            print('\nConnected to {}:{}'.format(HOST, PORT))
            print("Type message, enter to send, 'q' to quit")
            msg = input()
            if msg == 'q': break
            tincanchat.send_msg(sock, msg)  # Blocks until sent
            print('Sent message: {}'.format(msg))
            msg = tincanchat.recv_msg(sock)  # Blocks until received
                                             # complete message
            print('Received echo: ' + msg)
        except ConnectionError:
            print('Socket error')
            break
        finally:
            sock.close()
            print('Closed connection to server\n')

If we're running our server on a different machine from the one on which we are running the client, then we can supply the IP address or the hostname of the server as a command line argument to the client program. If we don't, then it will default to trying to connect to the localhost. The third and fourth lines of the code check the command line arguments for a server address. Once we've determined which server to connect to, we enter our main loop, which loops forever until we kill the client by entering q as a message.
Within the main loop, we first create a connection to the server. Second, we prompt the user to enter the message to send and then we send the message using the tincanchat.send_msg() function. We then wait for the server's reply. Once we get the reply, we print it and then we close the connection as per our protocol.

Give our client and server a try. Run the server in a terminal by using the following command:

$ python 1.1-echo_server-uni.py
Listening on ('0.0.0.0', 4040)

In another terminal, run the client and note that you will need to specify the server if you need to connect to another computer, as shown here:

$ python 1.2-echo_client.py 192.168.0.7
Type message, enter to send, 'q' to quit

Running the terminals side by side is a good idea, because you can simultaneously see how the programs behave. Type a few messages into the client and see how the server picks them up and sends them back. Disconnecting with the client should also prompt a notification on the server.

Concurrent I/O

If you're adventurous, then you may have tried connecting to our server using more than one client at once. If you tried sending messages from both of them, then you'd have seen that it does not work as we might have hoped. If you haven't tried this, then give it a go. A working echo session on the client should look like this:

Type message, enter to send, 'q' to quit
hello world
Sent message: hello world
Received echo: hello world
Closed connection to server

However, when trying to send a message by using a second connected client, we'll see something like this:

Type message, enter to send, 'q' to quit
hello world
Sent message: hello world

The client will hang when the message is sent, and it won't get an echo reply. You may also notice that if we send a message by using the first connected client, then the second client will get its response. So, what's going on here? The problem is that the server can only listen for messages from one client at a time.
As soon as the first client connects, the server blocks at the socket.recv() call in tincanchat.recv_msg(), waiting for the first client to send a message. The server isn't able to receive messages from other clients while this is happening and so, when another client sends a message, that client blocks too, waiting for the server to send a reply.

This is a slightly contrived example. The problem in this case could easily be fixed at the client end by asking the user for an input before establishing a connection to the server. However, in our full chat service, the client will need to be able to listen for messages from the server while simultaneously waiting for user input. This is not possible in our present procedural setup.

There are two solutions to this problem. We can either use more than one thread or process, or use non-blocking sockets along with an event-driven architecture. We're going to look at both of these approaches, starting with multithreading.

Multithreading and multiprocessing

Python has APIs that allow us to write both multithreading and multiprocessing applications. The principle behind multithreading and multiprocessing is simply to take copies of our code and run them in additional threads or processes. The operating system automatically schedules the threads and processes across available CPU cores to provide fair processing time allocation to all the threads and processes. This effectively allows a program to simultaneously run multiple operations. In addition, when a thread or process blocks, for example, when waiting for I/O, the thread or process can be de-prioritized by the OS, and the CPU cores can be allocated to other threads or processes that have actual computation to do.

Here is an overview of how threads and processes relate to each other:

- Threads exist within processes. A process can contain multiple threads, but it always contains at least one thread, sometimes called the main thread.
- Threads within the same process share memory, so data transfer between threads is just a case of referencing the shared objects.
- Processes do not share memory, so other interfaces, such as files, sockets, or specially allocated areas of shared memory, must be used for transferring data between processes.
- When threads have operations to execute, they ask the operating system thread scheduler to allocate them some time on a CPU, and the scheduler allocates the waiting threads to CPU cores based on various parameters, which vary from OS to OS. Threads in the same process may run on separate CPU cores at the same time.

Although two processes have been displayed in the preceding diagram, multiprocessing is not going on here, since the processes belong to different applications. The second process is displayed to illustrate a key difference between Python threading and threading in most other programs. This difference is the presence of the GIL.

Threading and the GIL

The CPython interpreter (the standard version of Python available for download from www.python.org) contains something called the Global Interpreter Lock (GIL). The GIL exists to ensure that only a single thread in a Python process can run at a time, even if multiple CPU cores are present. The reason for having the GIL is that it makes the underlying C code of the Python interpreter much easier to write and maintain. The drawback of this is that Python programs using multithreading cannot take advantage of multiple cores for parallel computation.

This is a cause of much contention; however, for us this is not so much of a problem. Even with the GIL present, threads that are blocking on I/O are still de-prioritized by the OS and put into the background, so threads that do have computational work to do can run instead.
The following figure is a simplified illustration of this: The Waiting for GIL state is where a thread has sent or received some data and so is ready to come out of the blocking state, but another thread has the GIL, so the ready thread is forced to wait. In many network applications, including our echo and chat servers, the time spent waiting on I/O is much higher than the time spent processing data. As long as we don't have a very large number of connections (a situation we'll discuss later on when we come to event driven architectures), thread contention caused by the GIL is relatively low, and hence threading is still a suitable architecture for these network server applications. With this in mind, we're going to use multithreading rather than multiprocessing in our echo server. The shared data model will simplify the code that we'll need for allowing our chat clients to exchange messages with each other, and because we're I/O bound, we don't need processes for parallel computation. Another reason for not using processes in this case is that processes are more "heavyweight" in terms of the OS resources, so creating a new process takes longer than creating a new thread. Processes also use more memory. One thing to note is that if you need to perform an intensive computation in your network server application (maybe you need to compress a large file before sending it over the network), then you should investigate methods for running this in a separate process. Because of quirks in the implementation of the GIL, having even a single computationally intensive thread in a mainly I/O bound process when multiple CPU cores are available can severely impact the performance of all the I/O bound threads. For more details, go through the David Beazley presentations linked to in the following information box: Processes and threads are different beasts, and if you're not clear on the distinctions, it's worthwhile to read up. 
A good starting point is the Wikipedia article on threads, which can be found at http://en.wikipedia.org/wiki/Thread_(computing). A good overview of the topic is given in Chapter 4 of Benjamin Erb's thesis, which is available at http://berb.github.io/diploma-thesis/community/.

Additional information on the GIL, including the reasoning behind keeping it in Python, can be found in the official Python documentation at https://wiki.python.org/moin/GlobalInterpreterLock. You can also read more on this topic in Nick Coghlan's Python 3 Q&A, which can be found at http://python-notes.curiousefficiency.org/en/latest/python3/questions_and_answers.html#but-but-surely-fixing-the-gil-is-more-important-than-fixing-unicode.

Finally, David Beazley has done some fascinating research on the performance of the GIL on multi-core systems. Two presentations of importance are available online. They give a good technical background, which is relevant to this article. These can be found at http://pyvideo.org/video/353/pycon-2010--understanding-the-python-gil---82 and at https://www.youtube.com/watch?v=5jbG7UKT1l4.

A multithreaded echo server

A benefit of the multithreading approach is that the OS handles the thread switches for us, which means we can continue to write our program in a procedural style. Hence, we only need to make small adjustments to our server program to make it multithreaded, and thus capable of handling multiple clients simultaneously.
Create a new file called 1.3-echo_server-multi.py and add the following code to it:

import threading
import tincanchat

HOST = tincanchat.HOST
PORT = tincanchat.PORT

def handle_client(sock, addr):
    """ Receive one message and echo it back to client, then close
        socket """
    try:
        msg = tincanchat.recv_msg(sock)  # blocks until received
                                         # complete message
        msg = '{}: {}'.format(addr, msg)
        print(msg)
        tincanchat.send_msg(sock, msg)  # blocks until sent
    except (ConnectionError, BrokenPipeError):
        print('Socket error')
    finally:
        print('Closed connection to {}'.format(addr))
        sock.close()

if __name__ == '__main__':
    listen_sock = tincanchat.create_listen_socket(HOST, PORT)
    addr = listen_sock.getsockname()
    print('Listening on {}'.format(addr))

    while True:
        client_sock, addr = listen_sock.accept()
        # Thread will run function handle_client() autonomously
        # and concurrently to this while loop
        thread = threading.Thread(target=handle_client,
                                  args=[client_sock, addr],
                                  daemon=True)
        thread.start()
        print('Connection from {}'.format(addr))

You can see that we've just imported an extra module and modified our main loop to run our handle_client() function in separate threads, rather than running it in the main thread. For each client that connects, we create a new thread that just runs the handle_client() function. When the thread blocks on a receive or send, the OS checks the other threads to see if they have come out of a blocking state, and if any have, then it switches to one of them.

Notice that we have set the daemon argument in the thread constructor call to True. This will allow the program to exit if we hit Ctrl-C, without us having to explicitly close all of our threads first.
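To watch the per-client-thread pattern work end to end in a single script, here is a self-contained sketch. It inlines simplified stand-ins for the tincanchat helpers and a plain echo handler, so the details differ slightly from the listings above, but the accept-and-spawn loop is the same:

```python
import socket
import threading

def send_msg(sock, msg):
    """ Null-terminate and send a string (simplified framing) """
    sock.sendall(msg.encode('utf-8') + b'\0')

def recv_msg(sock):
    """ Read until the null delimiter arrives, then decode """
    data = bytearray()
    while b'\0' not in data:
        recvd = sock.recv(4096)
        if not recvd:
            raise ConnectionError('Socket closed prematurely')
        data += recvd
    return data.rstrip(b'\0').decode('utf-8')

def handle_client(sock):
    """ Echo one message back to the client, then close """
    try:
        send_msg(sock, recv_msg(sock))
    finally:
        sock.close()

def serve(listen_sock):
    """ Accept clients forever, one daemon thread per client """
    while True:
        client_sock, addr = listen_sock.accept()
        threading.Thread(target=handle_client, args=[client_sock],
                         daemon=True).start()

# Start a throwaway server on an ephemeral port
listen_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listen_sock.bind(('127.0.0.1', 0))
listen_sock.listen(5)
port = listen_sock.getsockname()[1]
threading.Thread(target=serve, args=[listen_sock], daemon=True).start()

# Two clients connect at once; neither blocks the other
socks = [socket.create_connection(('127.0.0.1', port)) for _ in range(2)]
for i, s in enumerate(socks):
    send_msg(s, 'client {}'.format(i))
replies = [recv_msg(s) for s in socks]
print(replies)  # ['client 0', 'client 1']
for s in socks:
    s.close()
```

Both clients get their echoes back without waiting on each other, which is exactly the behavior the single-threaded server couldn't provide.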
If you try this echo server with multiple clients, then you'll see that a second client that connects and sends a message will immediately get a response.

Summary

We looked at how to develop network protocols while considering aspects such as the connection sequence, the framing of the data on the wire, and the impact these choices will have on the architecture of the client and server programs. We worked through different architectures for network servers and clients, demonstrating the multithreaded model by writing a simple echo server.

Further resources on this subject:

- Importing Dynamic Data
- Driving Visual Analyses with Automobile Data (Python)
- Preparing to Build Your Own GIS Application
Extending Applications in Tcl

Packt
06 Jul 2010
12 min read
(For more resources on Tcl, see here.)

Very often applications need to be extended with additional functionalities. In many cases it is also necessary to run different code on various types of machines. A good solution to this problem is to introduce extensibility to our application. We'll want to allow additional modules to be deployed to specified clients—this means delivering modules to clients, keeping them up to date, and handling loading and unloading of modules.

The following sections will introduce how Starkits and Tcl's VFS can be used to easily create modules for our application, and how these can be used to deploy additional code. We'll also introduce a simple implementation of handling modules, both on the server side and the client side. This requires creating code for downloading the modules themselves, loading them, and the mechanism for telling clients what actions they should be performing.

The server will be responsible for telling clients when they should load or unload any of the modules. It will also inform clients if any of the modules are out of date. Clients will use this information to make sure the modules that are loaded are consistent with what the server expects. The following steps will be performed by the client:

1. The client sends a list of loaded modules and a list of all available modules, along with their MD5 checksums, to the server.
2. The server responds with the list of modules to load, the list to unload, and the list that needs to be downloaded.
3. The client downloads the modules that it needs; if the client has already downloaded all the modules it needs, no action is taken.
4. The client unloads modules that it should unload and loads modules that it should load; if the client has already loaded all the modules it should have, no action is taken.

Handling security is an additional consideration when creating pluggable applications. Clients should check if files are authentic whenever they download any code, which will be run on the client.
This can be achieved using SSL and Certificate Authorities.

Starkits as extensions

One of the features Tcl offers is its VFS and the use of single file archives for staging entire archives. We've used these technologies to create our clients as single file executables. Now we'll reuse similar mechanisms to create modules for our application. All modules will simply be a Starkit VFS—this will allow embedding all types of files, creating complex scripts, and easily managing whatever a module contains.

Our modules will have their MD5 checksum calculated, similar to the automatic update feature. This will allow the application to easily check whether a module needs to be updated or not. In order to deliver modules to a client, the server will allow downloading modules, similar to retrieving binaries for automatic update.

Building modules

Our modules will be built similar to how the binaries were built, with a minor exception. Modules will have their MD5 checksum stored as the first 32 bytes of the file; this does not cause problems for the MK4 VFS package and allows easy checking of whether a package is up to date or not.

The source code for each module will be stored in the mod-<modulename> directories. For the purpose of this implementation, clients will source the load.tcl script when loading the package and will source the unload.tcl script when unloading it. It will be up to the module to initialize and finalize itself properly.

The directory and file structure in the following screenshot show how files for modules are laid out. It is based on the previous example, and only the mod-comm and mod-helloworld directories are new.

We'll need to modify the build.tcl script and add the code responsible for building the modules. We can do this after the initial binaries have been built.
First let's create the modules directory:

file mkdir modules

Then we'll iterate over each module to build and create the name of the target file:

foreach module {helloworld comm} {
    set modfile [file join modules $module.kit]

We'll start off by creating a 32-byte file:

    set fh [open $modfile w]
    fconfigure $fh -translation binary
    puts -nonewline $fh [string repeat " " 32]
    close $fh

Next we'll create the VFS, copy the source code of the module, and unmount it:

    vfs::mk4::Mount $modfile $modfile
    docopy mod-$module $modfile
    vfs::unmount $modfile

Now, calculate the MD5 checksum of the newly created module:

    set md5 [md5::md5 -hex -file $modfile]

And set the first 32 bytes to the checksum:

    set fh [open $modfile r+]
    fconfigure $fh -translation binary
    seek $fh 0 start
    puts -nonewline $fh $md5
    close $fh
}

We'll create two modules—helloworld and comm. The first one will simply log "Hello world!" every 30 seconds. The second one will set up a comm interface that can be used for testing and debugging purposes.

Let's start with creating the mod-helloworld/load.tcl script, which will be responsible for initializing the helloworld module:

csa::log::info "Loading helloworld module"

namespace eval helloworld {}

proc helloworld::hello {} {
    csa::log::info "Hello world!"
    after cancel helloworld::hello
    after 30000 helloworld::hello
}

helloworld::hello

Our module will log the information that it has been loaded, and write out hello world to the log. The helloworld::hello command also schedules itself to be run every 30 seconds.

The mod-helloworld/unload.tcl script that cleans up the module looks like this:

csa::log::info "Unloading helloworld module"
after cancel helloworld::hello
namespace delete helloworld

This will log information about the unloading of the module, cancel the next invocation of the helloworld::hello command, and remove the entire helloworld namespace. Implementing the comm module is also simple.
The mod-comm/load.tcl script is as follows:

csa::log::info "Loading comm module"
package require comm
comm::comm configure -port 1992 -listen 1 -local 1

This script simply loads the comm package, sets it up to listen on port 1992, and only accepts connections on the local interface. Unloading the package (in mod-comm/unload.tcl) will configure the comm interface not to listen for incoming connections:

csa::log::info "Unloading comm module"
comm::comm configure -listen 0

As the comm package cannot be simply unloaded, the best solution is for load.tcl to configure it to listen for connections and for unload.tcl to disable listening.

Server side

The server side of extensibility needs to perform several activities. First of all, we need to track which clients should be using which modules. The second function is providing clients with modules to download. The third functionality is telling clients which modules they need to fetch from the server, and which ones they need to load or unload from the environment.

Let's start off with adding initialization of the modules directory to our server. We need to add it to src-server/main.tcl:

set csa::binariesdirectory [file join [pwd] binaries]
set csa::modulesdirectory [file join [pwd] modules]
set csa::datadirectory [file join [pwd] data]

We'll also need to load an additional script for handling this functionality:

csa::log::debug "Sourcing remaining files"
source [file join $starkit::topdir commapi.tcl]
source [file join $starkit::topdir database.tcl]
source [file join $starkit::topdir clientrequest.tcl]
source [file join $starkit::topdir autoupdate.tcl]
source [file join $starkit::topdir clientmodules.tcl]

Next we'll also need to modify src-server/database.tcl to add support for storing the modules list.
We'll need to add a new table definition to the script that creates all tables:

CREATE TABLE clientmodules (
    client CHAR(36) NOT NULL,
    module VARCHAR(255) NOT NULL
);

In order to work on the data, we'll also need commands to add or remove a module for a specified client:

proc csa::setClientModule {client module enabled} {
    if {[llength [db eval {SELECT guid FROM clients
        WHERE guid=$client AND status=1}]] == 0} {
        return false
    }
    db eval {DELETE FROM clientmodules
        WHERE client=$client AND module=$module}
    if {$enabled} {
        db eval {INSERT INTO clientmodules (client, module)
            VALUES($client, $module)}
    }
    return true
}

Our command starts off by checking if a client exists and returns immediately if it does not. In the next step, we delete any existing entries and insert a new row if we've been asked to enable a particular module for a specified client.

We'll also need to be able to list modules associated with a particular client, which means executing a simple SQL query to list modules:

proc csa::getClientModules {client} {
    return [lsort [db eval {SELECT module FROM clientmodules
        WHERE client=$client}]]
}

Then we'll need to create the clientmodules.tcl file that will have functionality related to handling modules and providing them to clients. The first thing required is a function to read MD5 checksums from modules. We'll first check if the file exists and return an empty string if it does not; otherwise, we'll read and return the first 32 bytes of the file:

proc csa::getModuleMD5 {name} {
    variable modulesdirectory
    set filename [file join $modulesdirectory $name]
    if {![file exists $filename]} {
        return ""
    }
    set fh [open $filename r]
    fconfigure $fh -translation binary
    set md5 [read $fh 32]
    close $fh
    return $md5
}

Next we'll create a function that takes a client identifier, queries the database for modules for that client, and returns a list of module-md5sum pairs, which can be treated as a dictionary—where the key is the module name and the value is its MD5 checksum.
proc csa::getClientModulesMD5 {client} {
    set rc [list]
    foreach module [getClientModules $client] {
        lappend rc $module [getModuleMD5 $module]
    }
    return $rc
}

Another function will handle requests for a particular module. Similar to how it is implemented for automatic updates, we'll provide files only from a single directory, handle cases where a file does not exist, and register a prefix for the requests in TclHttpd:

proc csa::handleClientModule {sock suffix} {
    variable modulesdirectory
    set filename [file join $modulesdirectory [file tail $suffix]]
    log::debug "handleClientModule: File name: $filename"
    if {[file exists $filename]} {
        Httpd_ReturnFile $sock application/octet-stream $filename
    } else {
        log::warn "handleClientModule: $filename not found"
        Httpd_Error $sock 404
    }
}

Url_PrefixInstall /client/module csa::handleClientModule

For communication with the clients, we'll reuse the protocol for requesting jobs. We also need a function that, given a client identifier, a request dictionary, and the name of the response variable, will provide information to the client. It will also return whether a client should be provided with a list of jobs or not. If a client needs to download new or updated modules first, we do not need to provide a list of jobs, as the client will need to have updated modules first.

Let's start by making sure that both the modules available on the client and the list of loaded modules have been sent to the client:

proc csa::csaHandleClientModules {guid req responsevar} {
    upvar 1 $responsevar response
    set ok true
    if {[dict exists $req availableModules]
        && [dict exists $req loadedModules]} {

Then we copy the values to local variables for convenience, and get a list of modules that a client should have along with their MD5 checksums.
set rAvailable [dict get $req availableModules]
set rLoaded [dict get $req loadedModules]
set lAvailable [getClientModulesMD5 $guid]

We'll also create a list of actions we want to pass back to the client—the list of modules it needs to download and the lists of modules to load and unload. By default, all lists are empty and we'll add items only if we detect that the client should perform actions:

set downloadList [list]
set loadList [list]
set unloadList [list]

As the first step, we'll iterate over the modules that the client should have and check if it has them—if the client either does not have a module or its checksum differs, we'll tell the client to download it:

foreach {module md5} $lAvailable {
    if {(![dict exists $rAvailable $module])
        || ([dict get $rAvailable $module] != $md5)} {
        lappend downloadList $module
    }

After this we check if the client has already loaded this module. If not, we'll tell it to load the module:

    if {[lsearch -exact $rLoaded $module] < 0} {
        lappend loadList $module
    }
}

We'll also iterate over the modules that the client currently has loaded, and if any of them should not be loaded according to our list, we'll tell the client to unload it:

foreach module $rLoaded {
    if {![dict exists $lAvailable $module]} {
        lappend unloadList $module
    }
}

Once we've conducted our comparison, we tell the agent what should be done. If it needs to download any modules, we only return this information and indicate that there is no point in providing the list of jobs to perform:

if {[llength $downloadList] > 0} {
    dict set response moduleDownload $downloadList
    set ok false
} else {

Otherwise, if all modules on the client are up to date, we provide a list of modules to load or unload, if this is needed.
    if {[llength $loadList] > 0} {
        dict set response moduleLoad $loadList
    }
    if {[llength $unloadList] > 0} {
        dict set response moduleUnload $unloadList
    }
}

Finally, we return whether or not we should provide the client with a list of jobs to perform:

    }
    return $ok
}

Now we'll need to modify the csa::handleClientProtocol command in the src-server/clientrequest.tcl file to invoke our newly created csa::csaHandleClientModules command:

if {[csaHandleClientModules $guid $req response]} {
    # only specify jobs if client
    # has all the modules
    if {[dict exists $req joblimit]} {
        set joblimit [dict get $req joblimit]
    } else {
        set joblimit 10
    }
    dict set response jobs [getJobs $guid $joblimit]
    log::debug "handleClientProtocol: Jobs: [llength [dict get $response jobs]]"
}

This will cause jobs to be added only if the csaHandleClientModules command returned true. We can also modify the csa::apihandle command in the src-server/commapi.tcl file to allow adding or removing a module from a client. The following needs to be added inside the main switch responsible for handling commands:

switch -- $cmd {
    addClientModule {
        lassign $command cmd client module
        return [setClientModule $client $module 1]
    }
    removeClientModule {
        lassign $command cmd client module
        return [setClientModule $client $module 0]
    }

These commands simply invoke the csa::setClientModule command created earlier.
What's New In Ubuntu 9.10 "Karmic Koala"

Packt
19 Nov 2009
5 min read
Upstart

The first new technology that I would like to outline is called Upstart. I thought it was fitting to outline this feature first because it is integral to the boot process. Without the improvements in Upstart, Ubuntu would not be able to boot as fast as it currently does. Upstart has been used, incrementally, in Ubuntu since version 6.10, but with Ubuntu 9.10 the transition is complete.

Without going into too much detail, Upstart was designed to replace the aging System-V init system that is commonly found on Linux distributions. The idea behind Upstart is that modern systems are more dynamic and event-driven, as opposed to static and pre-defined, and the boot process should make use of that. With the previous system, System-V, each service that is started at boot time was assigned an ordered position in which to start. This has worked well enough for many years, but it can cause problems for maintainers, as they have to make sure that the boot order of services is globally compatible. For example, networking needs to be enabled before network services are enabled. If these (as a simple example) get out of order, services will not be available as expected after the machine has booted. Upstart takes the simple idea that certain services rely on other services and redefines them into event-driven tasks.

It is very exciting news that Ubuntu has finally completed the transition to Upstart after so many releases. This is a big step toward improving bootup performance on Ubuntu 9.10. You can read much more about Upstart at http://upstart.ubuntu.com.

XSplash

Ubuntu has also made another big change to the boot process with XSplash. XSplash replaces the previous USplash, which was known to cause issues. I have noticed that XSplash seems faster, as well as addressing the compatibility issues caused by its predecessor. I think you'll also enjoy the new bootup graphic.
This is another step towards Ubuntu's goal of a ten-second boot process by Ubuntu 10.04, which is due out in April of 2010. While both Upstart and XSplash contribute to improved boot performance, all other changes should be transparent to the end user. All other boot-related services should perform as expected, with no migration or customization on the user's part.

Linux Kernel: 2.6.31

Ubuntu 9.10 "Karmic Koala" has also upgraded the Linux Kernel to version 2.6.31. This version ships with kernel mode-setting enabled for Intel graphics cards, as well as some impressive security features. Kernel mode-setting (KMS) shifts responsibility for selecting and setting up the graphics mode from the X window system to the Linux Kernel itself. When X is started, it then detects and uses the mode without any further mode changes. This promises to make booting faster, improve graphical performance, and reduce screen flickering.

In regards to security features, Ubuntu 9.10 enables non-exec memory in this latest version of the Linux Kernel. What does this mean? Most modern CPUs protect against executing non-executable memory regions such as the heap or stack, but require that the Linux Kernel use "PAE" addressing. This is known either as Non-eXecute (NX) or eXecute-Disable (XD). This is the default for 64-bit and generic-pae kernels, and this protection reduces the areas an attacker can use to perform arbitrary code execution. The protection is now partially emulated on 32-bit kernels without PAE, starting in Ubuntu 9.10.

In addition, Ubuntu 9.10 has also made it possible to disable the loading of any additional kernel modules once the system is running. This adds yet another layer of protection against attackers loading kernel rootkits. This feature can be enabled by setting the value of /proc/sys/kernel/modules_disabled to 1.
With these security and performance additions in version 2.6.31 of the Linux kernel, Ubuntu promises to become a stronger contender in both desktop and server environments!

EXT4 Filesystem

The previous version of Ubuntu, 9.04, offered the ext4 filesystem as an option, but not as the default. After six months of testing and stabilization, I am happy to announce that ext4 is enabled by default in Ubuntu 9.10. I have been very happy with the ext4 filesystem: I have seen impressive speed improvements over ext3, and now use ext4 on each of my systems that supports it. Again, another impressive step toward a faster and more performance-driven Ubuntu experience.

AppArmor

The AppArmor system in Ubuntu 9.10 features an improved parser engine that uses cache files, which greatly reduces the time taken to initialize AppArmor at boot. AppArmor also now supports 'pux' mode which, when specified, means a process can transition to an existing profile if one exists, or simply run unconfined if not. If you're not familiar with AppArmor, it is a Mandatory Access Control application originally designed at Novell. It is now primarily community-driven, but has been the default in Ubuntu for a few releases. It continues to mature, and pre-defined security profiles are available for many common applications. To find out more about AppArmor, you can read the Ubuntu community documentation on using it at https://help.ubuntu.com/community/AppArmor.
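For readers who have not seen one, an AppArmor profile is a plain-text list of path rules. The fragment below is a purely hypothetical sketch: the program name and every path are invented for illustration (real profiles live under /etc/apparmor.d/):

```text
# Hypothetical profile; program name and paths are invented for illustration.
/usr/local/bin/exampled {
  #include <abstractions/base>

  /usr/local/bin/exampled mr,     # the daemon itself: memory-map + read
  /etc/exampled.conf r,           # configuration, read-only
  /var/log/exampled.log w,        # log file, writable

  # Execute a helper: with 'pux', use the helper's profile if one
  # exists, otherwise let it run unconfined.
  /usr/local/bin/example-helper pux,
}
```

Access modes such as r, w, and mr grant read, write, and memory-map-plus-read permission; execute-transition modes such as px, ux, and the new pux appear in the same rule position for programs the confined process may run.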
Ext JS 5 – an Introduction

Packt
11 Aug 2015
17 min read
In this article by Carlos A. Méndez, the author of the book Learning Ext JS - Fourth Edition, we will look at some of the important features in Ext JS. When learning a new technology such as Ext JS, some developers have a hard time getting started, so this article covers important points introduced in the recent version of Ext JS. Developers often find themselves referencing online documentation, blogs, and forums looking for answers, trying to figure out how the library and all the components work together. Even though there are tutorials in the official learning center, it is helpful to have a guide that teaches the library from the basics to a more advanced level.

Ext JS is a state-of-the-art framework for creating Rich Internet Applications (RIAs). The framework allows us to create cross-browser applications with a powerful set of components and widgets. The idea behind the framework is to create user-friendly applications in rapid development cycles, facilitate teamwork (MVC or MVVM), and achieve long-term maintainability. Ext JS is not just a library of widgets anymore; the brand new version is a framework full of exciting new features for us to play with. Some of these features are the new class system, the loader, and the new application package, which defines a standard way to code our applications, among much more awesome stuff.

The company behind the Ext JS library is Sencha Inc. They work on great products based on web standards. Some of Sencha's other well-known products are Sencha Touch and Sencha Architect. In this article, we will cover some of the basic concepts of version 5 of the framework and take a look at some of its new features.

(For more resources related to this topic, see here.)

Considering Ext JS for your next project

Ext JS is a great library for creating RIAs that require a lot of interactivity with the user.
If you need complex components to manage your information, then Ext is your best option, because it contains a lot of widgets (such as the grid, forms, trees, and panels), a great data package, and a class system. Ext JS is best suited for enterprise or intranet applications; it's a great tool for developing an entire CRM or ERP software solution. One of the more appealing examples is the Desktop sample (http://dev.sencha.com/ext/5.1.0/examples/desktop/index.html). It really looks and feels like a native application running in the browser. In some cases this is an advantage, because users already know how to interact with the components, and we can improve the user experience.

Ext JS 5 came out with a great tool to create themes and templates in a very simple way. The theming framework is built on top of Compass and Sass, so we can modify some variables and properties and, in a few minutes, have a custom template for our Ext JS applications. If we want something more complex or unique, we can modify the original template to suit our needs. This might be more time-consuming, depending on our experience with Compass and Sass. Compass and Sass are extensions for CSS: we can use expressions, conditions, variables, mixins, and many more awesome things to generate well-formatted CSS. You can learn more about Compass on its website at http://compass-style.org/.

The new class system allows us to define classes incredibly easily. We can develop our application using the object-oriented programming paradigm and take advantage of single and multiple inheritance. This is a great advantage, because we can implement any of the available patterns, such as MVC, MVVM, Observable, or any other. This gives us a good code structure, which in turn makes maintenance easier.

Another thing to keep in mind is the growing community around the library; there are a lot of people around the world working with Ext JS right now.
You can even join user groups, which hold local meetups frequently to share knowledge and experiences; I recommend you look for a group in your city or create one.

The new loader system is a great way to load our modules or classes on demand: we can load only the modules the user needs, just in time. This allows us to bootstrap our application faster by loading only the minimal code needed for the application to work.

One more thing to keep in mind is the ability to prepare our code for deployment. We can compress and obfuscate our code for a production environment using the Sencha Command, a tool we can run in a terminal to automatically analyze all the dependencies of our code and create packages.

Documentation is very important, and Ext JS has great documentation. It is very descriptive, with a lot of examples, videos, and sample code, so we can see components in action right on the documentation pages; we can also read the comments from the community.

What's new in Ext JS 5

Ext JS 5 introduces a great number of new features. We'll briefly cover a few of the significant additions in version 5:

Tablet support and new themes: This introduces the ability to create apps compatible with touch-screen devices (touch-screen laptops, PCs, and tablets). The Crisp theme is introduced, based on the Neptune theme. There are also new themes for tablet support: Neptune Touch and Crisp Touch.

New application architecture (MVVM): Sencha added a new alternative to MVC called MVVM (which stands for Model-View-ViewModel). This new architecture has data binding and two-way data binding, allowing us to drop much of the extra code that some of us were writing in past versions.
This new architecture introduces:

- Data binding
- View controllers
- View models

Routing: Routing provides deep linking of application functionality and allows us to perform certain actions or methods in our application by translating the URL. This gives us the ability to control the application state, meaning we can link directly to a specific part of our application. It can also handle multiple actions in the URL.

Responsive configurations: We now have the ability to set the responsiveConfig property (a new property) on some components. It is a configuration object whose keys are rules (conditions); when a rule is met, the configurations under it are applied. As an example:

responsiveConfig: {
    'width > 800': {
        region: 'west'
    },
    'width <= 800': {
        region: 'north'
    }
}

Data package improvements: Some good changes came in version 5 relating to data handling and data manipulation. These changes give developers an easier journey in their projects; some of the new things are:

- Common Data (the Ext JS data class, Ext.Data, is now part of the core package)
- Many-to-many associations
- Chained stores
- Custom field types

Event system: The event logic was changed; there is now a single listener attached at the very top of the DOM hierarchy. This means that when a DOM element fires an event, it bubbles to the top of the hierarchy before it is handled. Ext JS intercepts it there and checks the relevant listeners you added to the component or store. This reduces the number of interactions with the DOM and also makes it possible to enable gestures.

Sencha Charts: Charts work on both Ext JS and Sencha Touch, and have enhanced performance on tablet devices. Legacy Ext JS 4 charts were converted into a separate package to minimize the conversion/upgrade effort.
In version 5, charts have new features such as:

- Candlestick and OHLC series
- Pan, zoom, and crosshair interactions
- Floating axes
- Multiple axes
- SVG and HTML Canvas support
- Better performance
- Greater customization
- Chart themes

Tab panels: Tab panels have more options to control configurations such as icon alignment and text rotation. Thanks to new flexible Sass mixins, we can easily control presentation options.

Grids: This component, present since version 2.x, is one of the most popular components; we may call it one of the cornerstones of this framework. In version 5, it got some awesome new features:

- Components in cells
- Buffered updates
- Cell updaters
- Grid filters (the popular "UX" (user extension) has been rewritten and integrated into the framework; filters can also be saved in the component state)
- Rendering optimizations

Widgets: This is a lightweight component, a middle ground between Ext.Component and the cell renderer.

Breadcrumb bars: This new component displays the data of a store (a specific data store for the tree component) in toolbar form. This new control can be a space saver on small screens or tablets.

Form package improvements: Ext JS 5 introduces some new controls and significant changes to others:

- Tagfield: A new control to select multiple values.
- Segmented buttons: Buttons with a presentation similar to multiple selection on mobile interfaces.
- Goodbye to TriggerField: In version 5, TriggerField is deprecated; the way to create triggers now is to use the Text field and implement the triggers in the TextField configuration. (TriggerField in version 4 is a text field with one or more buttons configured on its right side.)
- Field and form layouts: Layouts were refactored using HTML and CSS, so performance is now better.
New SASS mixins (http://sass-lang.com/): Several components that previously could not be custom-themed can now be styled in multiple ways in a single theme or application. These components are:

- Ext.menu.Menu
- Ext.form.Labelable
- Ext.form.FieldSet
- Ext.form.CheckboxGroup
- Ext.form.field.Text
- Ext.form.field.Spinner
- Ext.form.field.Display
- Ext.form.field.Checkbox

The Sencha Core package: The core package contains code shared between Ext JS and Sencha Touch; in the future, this core will be part of the next major release of Sencha Touch. The core includes:

- Class system
- Data
- Events
- Element
- Utilities
- Feature/environment detection

Preparing for deployment

So far, we have seen a few features that help to architect JavaScript code, but we also need to prepare our application for a production environment. While an application is in the development environment, Ext JS classes (as well as our own classes) are dynamically loaded when the application requires them. In this environment, it is really helpful to load each class in a separate file, which allows us to debug the code easily and find and fix bugs.

Before the application is compiled, we must know the three basic parts of an application:

- app.json: This file contains specific details about our application. Sencha CMD processes this file first.
- build.xml: This file contains a minimal initial Ant script and imports a task file located at .sencha/app/build-impl.xml.
- .sencha: This folder contains many files related to, and used by, the build process.

The app.json file

As we said before, the app.json file contains information about the settings of the application. Open the file and take a look. We can make changes to this file, such as the theme that our application is going to use.
For example, we can use the following line of code:

"theme": "my-custom-theme-touch",

Alternatively, we can use a normal (non-touch) theme:

"theme": "my-custom-theme",

We can also use the following to enable charts:

"requires": [
    "sencha-charts"
],

This specifies that we are going to use the chart or draw classes in our application (the chart package for Ext JS 5). Now, at the end of the file, there is an ID for the application:

"id": "7833ee81-4d14-47e6-8293-0cb8120281ab"

After this ID, we can add other properties. As an example, suppose the application will be used in Central and South America. Then we need to include the locale (ES or PT), so we can add the following:

,"locales": ["es"]

We can also add multiple languages:

,"locales": ["es", "pt", "en"]

This causes the compilation process to include the corresponding locale files located at ext/packages/ext-locale/build. This article can't cover every property in the file, so it's recommended that you take a deep look into the Sencha CMD documentation at http://docs-origin.sencha.com/cmd/5.x/microloader.html to learn more about the app.json file.

The Sencha command

To create our production build, we need to use the Sencha Command. This tool will help us in our purpose. If you are running Sencha CMD on Windows 7 or Windows 8, it's recommended that you run the tool with administrator privileges. So let's type this in our console tool:

[path of my app]\sencha app build

In my case (Windows 7, 64-bit), I typed:

K:\x_extjsdev\app_test\myapp>sencha app build

After the command runs, you will see the build output in your console tool. So, let's check out the build folder inside our application folder.
We may have the following list of files. Notice that the build process has created these:

- resources: This folder contains a copy of our resources folder, plus one or more CSS files starting with myApp-all.
- app.js: This file contains all of the necessary JS (Ext JS core classes, components, and our custom application classes).
- app.json: A small, compressed manifest file.
- index.html: This file is similar to our index file in development mode, except that the line

<script id="microloader" type="text/javascript" src="bootstrap.js"></script>

was replaced by some compressed JavaScript code, which acts in a similar way to the micro loader.

Notice that the serverside folder, where we keep some JSON files (in other cases this could be PHP, ASP, and so on), does not exist in the production folder. The reason is that this folder is not part of what Sencha CMD and the build files consider. Many developers will say, "Hey, let's copy the folder and move on." However, the good news is that we can include that folder with an Apache Ant task.

Customizing the build.xml file

We can add custom code (in Apache Ant style) to perform new tasks that we need in order to make our application build even better. Let's open the build.xml file. You will see something like this:

<?xml version="1.0" encoding="utf-8"?>
<project name="myApp" default=".help">
    <!-- comments... -->
    <import file="${basedir}/.sencha/app/build-impl.xml"/>
    <!-- comments... -->
</project>

So, let's place the following code before </project>:

<target name="-after-build" depends="init">
    <copy todir="${build.out.base.path}/serverside" overwrite="false">
        <fileset dir="${app.dir}/serverside" includes="**/*"/>
    </copy>
</target>

This new code in the build.xml file establishes that, after the whole build process, if there was no error during the init process, the ${app.dir}/serverside folder will be copied to the ${build.out.base.path}/serverside output path.
So now, let's type the command for building the application again:

sencha app build -c

In this case, we added -c to first clean the build/production folder and then create a new set of files. After the process completes, take a look at the folder contents. Notice that the serverside folder has now been copied to the production build folder, thanks to the custom code we placed in the build.xml file.

Compressing the code

After building our application, let's open the app.js file. By default, the build process uses the YUI Compressor to compact the JS code (http://yui.github.io/yuicompressor/). Inside the .sencha folder there are many files; depending on the type of build we are creating, there are files such as the base file, where the properties are defined in defaults.properties. This file must not be changed whatsoever; for that, we have other files that can override the values defined in it. For the production build, for example, we have the following files:

- production.defaults.properties: This file contains properties/variables that will be used for the production build.
- production.properties: This file has only comments. The idea behind this file is that developers place the variables they want in order to customize the production build.

By default, in the production.defaults.properties file, you will see something like the following code:

# Comments ......
# more comments......
build.options.logger=no
build.options.debug=false
# enable the full class system optimizer
app.output.js.optimize=true
build.optimize=${build.optimize.enable}
enable.cache.manifest=true
enable.resource.compression=true
build.embedded.microloader.compressor=-closure

Now, as an example of compression, let's make a change and place some variables inside the production.properties file. The code we place here will override the properties set in defaults.properties and production.defaults.properties.
So, let's write the following code after the comments:

build.embedded.microloader.compressor=-closure
build.compression.yui=0
build.compression.closure=1
build.compression=-closure

With this code, we are setting up the build process to use Closure as the JavaScript compressor, for the application code and the micro loader alike. Now save the file and use the Sencha CMD tool once again:

sencha app build

Wait for the process to end and take a look at app.js. You will notice that the code is quite different; this is because the Closure compiler performed the compression. Run the app and you will notice no change in the behavior and use of the application.

As we used the production.properties file in this example, notice that the .sencha folder also holds files for the other environments:

Environment    File (or files)
Testing        testing.defaults.properties and testing.properties
Development    development.defaults.properties and development.properties
Production     production.defaults.properties and production.properties

It's not recommended that you change the *.defaults.properties files. That's the reason for the *.properties files: there you can set your own variables, which will override the settings in the defaults files.

Packaging and deploying

Finally, after we have built our application, we have our production build/package ready to be deployed. We now have all the files required to make our application work on a public server. We don't need to upload anything from the Ext JS folder, because everything we need is in app.js (all of the Ext JS code plus our own code). The resources folder contains the images and CSS (the theme used in the app), and of course we have our serverside folder. So now, we need to upload all of the content to the server, and we are ready to test the production build on a public server.
Summary

In this article, you learned the reasons to consider using Ext JS 5 for developing projects. We briefly mentioned some of the significant new features in version 5 that are instrumental in developing applications. Later, we talked about compiling and preparing an application for a production environment. Using Sencha CMD, and configuring JSON or XML files to build a project, can sometimes be overwhelming, but don't panic! Check out the documentation from Sencha and Apache. Remember that there's no reason to be afraid of testing and playing with the configurations; it's all part of learning and knowing how to use Sencha Ext JS.

Resources for Article:

Further resources on this subject:

- The Login Page using Ext JS [Article]
- So, what is Ext JS? [Article]
- AngularJS Performance [Article]
Installing OpenVPN on Linux and Unix Systems: Part 1

Packt
31 Dec 2009
10 min read
Prerequisites

All Linux/Unix systems must meet the following requirements for a successful OpenVPN installation:

- Your system must provide support for the universal TUN/TAP driver. The kernels (version 2.4 and newer) of almost all modern Linux distributions provide support for TUN/TAP devices. Only if you are using an old distribution, or if you have built your own kernel, will you have to add this support to your configuration. The project's website can be found at http://vtun.sourceforge.net/tun/.
- The OpenSSL libraries have to be installed on your system. I have never encountered a modern Linux/Unix system that does not meet this requirement. However, if you want to compile OpenVPN from source code, the SSL development package may be necessary. The website is http://www.openssl.org/.
- The Lempel-Ziv-Oberhumer (LZO) compression library has to be installed. Again, most modern Linux/Unix systems provide these packages, so there shouldn't be any problem. LZO is a real-time compression library that OpenVPN uses to compress data before sending it. Packages can be found at http://openvpn.net/download.html, and the project's website is http://www.oberhumer.com/opensource/lzo/.

Most Linux/Unix installation tools are able to resolve these so-called dependencies on their own, but it might be helpful to know where to get the required software. Most commercial Linux systems, like SuSE, provide installation tools, like Yet another Setup Tool (YaST), and contain up-to-date versions of OpenVPN on their installation media (CD or DVD). Furthermore, systems based on RPM software can also install and manage OpenVPN software at the command line. Linux systems like Debian use sophisticated package management tools that can install software provided by repositories on web servers. No local media is needed; the package management will resolve potential dependencies by itself and install the newest and safest possible version of OpenVPN.
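The exact command differs per distribution family. As a small illustrative sketch (the detection logic here is my own, not from the book), the following prints the install command appropriate for whichever package manager is found on the current system:

```shell
#!/bin/sh
# Print the OpenVPN install command for whichever package manager
# this system provides. Purely illustrative: run the printed command
# yourself as root (or via sudo).
openvpn_install_hint() {
  if command -v apt-get >/dev/null 2>&1; then
    echo "sudo apt-get install openvpn"        # Debian/Ubuntu
  elif command -v zypper >/dev/null 2>&1; then
    echo "sudo zypper in openvpn"              # SuSE 10.1 and later
  elif command -v yum >/dev/null 2>&1; then
    echo "yum install openvpn"                 # Red Hat/Fedora
  elif command -v pkg_add >/dev/null 2>&1; then
    echo "pkg_add -r openvpn"                  # older FreeBSD
  else
    echo "no known package manager detected"
  fi
}

openvpn_install_hint
```

On any of these systems, the package manager pulls in the OpenSSL and LZO dependencies listed above automatically.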
FreeBSD and other BSD-style systems use package management tools such as pkg_add or the ports system. Like all open source projects, the OpenVPN source code is available for download. These compressed tar.gz or tar.bz2 archives can be downloaded from http://openvpn.net/download.html and unpacked to a local directory. The source code then has to be configured and translated (compiled) for your operating system.

You can also install unstable, developer, or older versions of OpenVPN from http://openvpn.net/download.html. This may be interesting if you want to test new features of forthcoming versions. Daily (unstable!) OpenVPN source code extracts can be obtained from http://sourceforge.net/cvs/?group_id=48978. Here you will find the Concurrent Versions System (CVS) repository, where all OpenVPN developers post their changes to the project files.

Installing OpenVPN on SuSE Linux

Installing OpenVPN on SuSE Linux is almost as easy as installing it under Windows or Mac OS X; Linux users may consider it even easier. On SuSE Linux, almost all administrative tasks can be carried out using the administration interface YaST, and OpenVPN can be installed completely with it. The people distributing SuSE have always tried to include up-to-date software in their distribution. Thus, the installation media of OpenSuSE 11 already contains version 2.0.9 of OpenVPN, and both Enterprise editions, SLES 10 and the forthcoming SLES 11, offer five years of support, with updates that include current versions of OpenVPN. Both OpenSuSE and SLES use YaST for installing software.

Using YaST to install software

Start YaST. Under both GNOME and the K Desktop Environment (KDE, the standard desktop under SuSE Linux), you will find YaST in the main menu under System | YaST, or as an icon on the desktop. If you are logged in as a normal user, you will be prompted to enter your root password and confirm it. The YaST control center is then started.
This administration interface consists of many different modules, which are represented by symbols in the right half of the window and grouped by the labels on the left. After starting YaST, click on the symbol labeled Software Management in the right column to start the software management interface of YaST.

The software management tool in YaST is very powerful. Under SuSE, data about installed and installable software is kept in a database that can be searched very easily. Select the entry Search in the drop-down list Filter: and enter openvpn in the Search field. YaST will find at least one entry that matches your search value openvpn. Depending on the (online) installation sources that you have configured, various add-ons and tools for OpenVPN will be found. If you chose to add the community repositories, as I did on this system, then OpenSuSE will list more than 10 hits.

Select the entry openvpn by checking the box beside the entry in the first column. If you want information about the OpenVPN package, have a look at the lower half of the right side; here you will find the software's Description, Technical Data, Dependencies, and more details about the package that you have selected.

Click on the Accept button to start the OpenVPN installation. If you are installing from a local medium, put your CD or DVD in your local drive now, and YaST will retrieve the OpenVPN files from your installation media. If you have configured your system to use one of the web/FTP servers of SuSE for installation, this might take a while. The files are unpacked and installed on your system, and YaST updates the configuration. This is managed by the script SuSEconfig and other scripts called by it.

SuSEconfig and YaST were once quite infamous for deleting configuration created by the local administrator, or for omitting relevant changes. This problem only occurred when updating or re-installing software that was previously installed.
However, the latest SuSE versions have proven very reliable, and the system configuration tools never delete configuration files that you have added manually. Instead, the standard configuration files installed with the new software package may be renamed to <file>.rpmnew or similar, and your configuration is loaded. During installation, SuSEconfig calls several helper scripts, updates your configuration, and informs you of the progress in a separate window. After successful software installation, you are asked whether you want to install more packages or exit the installation. Click on the Finish button.

The Novell/OpenSuSE teams have added a very handy tool called zypper to their package management. From version 10.1 onwards, you can simply install software from a root console by typing zypper in openvpn. Of course, this only works if you know the exact name of the package that you want to install. If not, you will have to search for it first, for example by using zypper search vpn.

Installing OpenVPN on Red Hat Fedora using yum

If you are using Red Hat Fedora, the Yellow dog Updater, Modified (yum) is probably the easiest way to install software. It can be found at http://linux.duke.edu/projects/yum/, and provides many interesting features, such as automatic updates, solving dependency problems, and managing installation of software packages. Even though OpenVPN installation on Fedora can only be done on the command line, it is still a very easy task. The installation makes use of the commands wget, rpm, and yum:

- wget: A command-line download manager suitable for FTP or HTTP downloads.
- rpm: The Red Hat Package Manager is a software management system used by distributions like SuSE or Red Hat. It keeps track of changes and can solve dependencies between programs.
- yum: This provides a simple installation program for RPM-based software.

To use yum, you have to adapt its configuration file as follows. Log in as administrator (root).
Change to Fedora's configuration directory, /etc. Save the old (probably original) configuration file yum.conf by renaming or moving it; you can use a command such as mv yum.conf yum.conf_fedora_org to accomplish this. The website http://www.fedorafaq.org/ provides a suitable configuration file for yum. Download the file http://www.fedorafaq.org/samples/yum.conf using wget. The command-line syntax is:

wget http://www.fedorafaq.org/samples/yum.conf

At the same website, a sophisticated yum configuration is available for download. Install this as well:

rpm -Uvh http://www.fedorafaq.org/yum

The following excerpt shows the output of these five steps on the system:

[root@fedora ~]# cd /etc
[root@fedora etc]# mv yum.conf yum.conf.org
[root@fedora etc]# wget http://www.fedorafaq.org/samples/yum.conf
--11:33:25-- http://www.fedorafaq.org/samples/yum.conf
           => `yum.conf'
Resolving www.fedorafaq.org... 70.84.209.18
Connecting to www.fedorafaq.org[70.84.209.18]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 595 [text/plain]

100%[==================================================>] 595 --.--K/s

11:33:25 (405.20 KB/s) - `yum.conf' saved [595/595]

[root@fedora etc]# rpm -Uvh http://www.fedorafaq.org/yum
Retrieving http://www.fedorafaq.org/yum
Preparing...       ########################################### [100%]
   1:yum-fedorafaq ########################################### [100%]
[root@fedora etc]#

The rest of the OpenVPN installation is very simple: just enter yum install openvpn in your root shell. yum will start and produce a lot of output. Let's have a short look at what yum does.
[root@fedora ~]# yum install openvpn
Setting up Install Process
Setting up repositories
livna             100% |=========================|  951 B  00:00
updates-released  100% |=========================|  951 B  00:00
base              100% |=========================| 1.1 kB  00:00
extras            100% |=========================| 1.1 kB  00:00
Reading repository metadata in from local files
primary.xml.gz    100% |=========================| 127 kB  00:00
livna     : ################################################## 380/380
Added 380 new packages, deleted 0 old in 1.36 seconds
primary.xml.gz    100% |=========================| 371 kB  00:00
updates-re: ################################################## 1053/1053
Added 0 new packages, deleted 13 old in 0.93 seconds

yum has set up the installation process and integrated online repositories for the installation of software. This feature is the reason why Fedora does not need a URL source for installing OpenVPN. The repository metadata contains information about the location, availability, and dependencies of packages. Resolving the dependencies is the next step:

Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for openvpn to pack into transaction set.
openvpn-2.0.9-1.fc5.i386. 100% |=========================|  18 kB  00:00
---> Package openvpn.i386 0:2.0.9-1.fc5 set to be updated
--> Running transaction check
--> Processing Dependency: liblzo.so.1 for package: openvpn
--> Restarting Dependency Resolution with new changes.
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for lzo to pack into transaction set.
lzo-1.08-4.i386.rpm       100% |=========================| 3.2 kB  00:00
---> Package lzo.i386 0:1.08-4 set to be updated
--> Running transaction check

Dependencies Resolved

OpenVPN needs the LZO library, and yum is about to resolve this dependency.
As a next step, yum tests whether this library has unresolved dependencies of its own. If not, we are presented with an overview of the packages to be installed. Confirm by entering y and pressing the Enter key, and yum will start downloading the required packages. If the RPM process that yum uses to install the software packages encounters a missing encryption key, confirm the import of this key from http://www.fedoraproject.org by entering y and pressing the Enter key. This GPG key is used to verify the authenticity of the packages selected for installation.

Key imported successfully
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing: lzo     ######################### [1/2]
  Installing: openvpn ######################### [2/2]

Installed: openvpn.i386 0:2.0.9-1.fc5
Dependency Installed: lzo.i386 0:1.08-4
Complete!
[root@fedora etc]#

That's all! yum has downloaded, checked, and installed both OpenVPN and the LZO library.
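The GPG check that yum performs serves the same broad purpose as comparing a digest of the downloaded package against a trusted value: detecting tampered or corrupted files. RPM's real check uses public-key signatures, not a bare digest, so the following Python sketch is only a simplified illustration of the integrity idea; the payload bytes are invented for the example:

```python
import hashlib

def verify_digest(data, expected_hex, algorithm="sha256"):
    """Return True if `data` hashes to the expected hex digest."""
    digest = hashlib.new(algorithm, data).hexdigest()
    return digest == expected_hex

# Hypothetical package payload and its known-good digest.
payload = b"openvpn-2.0.9-1.fc5.i386.rpm contents"
good = hashlib.sha256(payload).hexdigest()

print(verify_digest(payload, good))                # True
print(verify_digest(payload + b"tampered", good))  # False
```

An attacker who can modify the payload in transit cannot produce a matching digest without also replacing the trusted reference value, which is why the reference (here, the GPG key) must come from a channel you already trust.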


Automating Your System Administration and Deployment Tasks Over SSH

Packt
20 Mar 2014
7 min read
(For more resources related to this topic, see here.)

Executing a remote shell command using telnet

If you need to connect to an old network switch or router via telnet, you can do so from a Python script instead of using a bash script or an interactive shell. This recipe will create a simple telnet session and show you how to execute shell commands on the remote host.

Getting ready

You need to install the telnet server on your machine and ensure that it's up and running. You can use a package manager that is specific to your operating system to install the telnet server package. For example, on Debian/Ubuntu, you can use apt-get or aptitude to install the telnetd package, as shown in the following commands:

$ sudo apt-get install telnetd
$ telnet localhost

How to do it...

Let us define a function that will take a user's login credentials from the command prompt and connect to a telnet server. Upon successful connection, it will send the Unix ls command, and then display the output of the command, for example, listing the contents of a directory. Listing 7.1 shows the code for a telnet session that executes a Unix command remotely:

#!/usr/bin/env python
# Python Network Programming Cookbook -- Chapter - 7
# This program is optimized for Python 2.7.
# It may run on any other version with/without modifications.

import getpass
import sys
import telnetlib


def run_telnet_session():
    host = raw_input("Enter remote hostname e.g. localhost: ")
    user = raw_input("Enter your remote account: ")
    password = getpass.getpass()

    session = telnetlib.Telnet(host)

    session.read_until("login: ")
    session.write(user + "\n")
    if password:
        session.read_until("Password: ")
        session.write(password + "\n")

    session.write("ls\n")
    session.write("exit\n")

    print session.read_all()


if __name__ == '__main__':
    run_telnet_session()

If you run a telnet server on your local machine and run this code, it will ask you for your remote user account and password. The following output shows a telnet session executed on a Debian machine:

$ python 7_1_execute_remote_telnet_cmd.py
Enter remote hostname e.g. localhost: localhost
Enter your remote account: faruq
Password:

ls
exit
Last login: Mon Aug 12 10:37:10 BST 2013 from localhost on pts/9
Linux debian6 2.6.32-5-686 #1 SMP Mon Feb 25 01:04:36 UTC 2013 i686

The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law.
You have new mail.
faruq@debian6:~$ ls
down              Pictures               Videos
Downloads         projects               yEd
Dropbox           Public
env               readme.txt
faruq@debian6:~$ exit
logout

How it works...

This recipe relies on Python's built-in telnetlib networking library to create a telnet session. The run_telnet_session() function takes the username and password from the command prompt. The getpass module's getpass() function is used to read the password, as this function won't echo what is typed on the screen. In order to create a telnet session, you need to instantiate a Telnet() class, which takes a hostname parameter to initialize. In this case, localhost is used as the hostname. You can use the argparse module to pass a hostname to this script.
The telnet session's remote output can be captured with the read_until() method. In the first case, the login prompt is detected using this method. Then, the username followed by a newline is sent to the remote machine with the write() method (in this case, the same machine is accessed as if it were remote). Similarly, the password is supplied to the remote host. Then, the ls command is sent for execution. Finally, to disconnect from the remote host, the exit command is sent, and all session data received from the remote host is printed on screen using the read_all() method.

Copying a file to a remote machine by SFTP

If you want to upload or copy a file from your local machine to a remote machine securely, you can do so via Secure File Transfer Protocol (SFTP).

Getting ready

This recipe uses a powerful third-party networking library, Paramiko, to show you an example of file copying by SFTP. You can grab the latest code of Paramiko from GitHub (https://github.com/paramiko/paramiko) or PyPI:

$ pip install paramiko

How to do it...

This recipe takes a few command-line inputs: the remote hostname, server port, source filename, and destination filename. For the sake of simplicity, we can use default or hard-coded values for these input parameters. In order to connect to the remote host, we need the username and password, which can be obtained from the user on the command line. Listing 7.2 explains how to copy a file remotely by SFTP, as shown in the following code:

#!/usr/bin/env python
# Python Network Programming Cookbook -- Chapter - 7
# This program is optimized for Python 2.7.
# It may run on any other version with/without modifications.

import argparse
import paramiko
import getpass


SOURCE = '7_2_copy_remote_file_over_sftp.py'
DESTINATION = '/tmp/7_2_copy_remote_file_over_sftp.py'


def copy_file(hostname, port, username, password, src, dst):
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    print " Connecting to %s \n with username=%s... \n" % (hostname, username)
    t = paramiko.Transport((hostname, port))
    t.connect(username=username, password=password)
    sftp = paramiko.SFTPClient.from_transport(t)
    print "Copying file: %s to path: %s" % (src, dst)
    sftp.put(src, dst)
    sftp.close()
    t.close()


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Remote file copy')
    parser.add_argument('--host', action="store", dest="host", default='localhost')
    parser.add_argument('--port', action="store", dest="port", default=22, type=int)
    parser.add_argument('--src', action="store", dest="src", default=SOURCE)
    parser.add_argument('--dst', action="store", dest="dst", default=DESTINATION)

    given_args = parser.parse_args()
    hostname, port = given_args.host, given_args.port
    src, dst = given_args.src, given_args.dst

    username = raw_input("Enter the username: ")
    password = getpass.getpass("Enter password for %s: " % username)

    copy_file(hostname, port, username, password, src, dst)

If you run this script, you will see an output similar to the following:

$ python 7_2_copy_remote_file_over_sftp.py
Enter the username: faruq
Enter password for faruq:
 Connecting to localhost
 with username=faruq...

Copying file: 7_2_copy_remote_file_over_sftp.py to path: /tmp/7_2_copy_remote_file_over_sftp.py

How it works...

This recipe can take various inputs for connecting to a remote machine and copying a file over SFTP. It passes the command-line input to the copy_file() function, which creates an SSH client using paramiko's SSHClient class. The client needs to load the system host keys.
It then creates an instance of paramiko's Transport class and connects to the remote system through it. The actual SFTP connection object, sftp, is created by calling paramiko's SFTPClient.from_transport() function, which takes the transport instance as input. After the SFTP connection is ready, the local file is copied over this connection to the remote host using the put() method. Finally, it's a good idea to clean up the SFTP connection and the underlying objects by calling the close() method separately on each object.
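The argparse handling in Listing 7.2 can be exercised on its own, without an SSH server. A minimal sketch (Python 3 syntax; the hostname value is invented for the example, and only the --host and --port options are reproduced):

```python
import argparse

def build_parser():
    """Rebuild the option parser used by the SFTP recipe."""
    parser = argparse.ArgumentParser(description='Remote file copy')
    parser.add_argument('--host', action="store", dest="host", default='localhost')
    parser.add_argument('--port', action="store", dest="port", default=22, type=int)
    return parser

# Supplying an explicit argument list instead of reading sys.argv
# makes the parser easy to test in isolation.
args = build_parser().parse_args(['--host', 'files.example.org'])
print(args.host, args.port)  # files.example.org 22
```

Because type=int is set on --port, argparse converts the string from the command line for you and rejects non-numeric values with a usage error, which is why the recipe can pass given_args.port straight into paramiko.Transport.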

Connecting to backend servers (Should know)

Packt
27 May 2013
5 min read
(For more resources related to this topic, see here.)

Getting ready

If you have a server architecture diagram, that's a good place to start listing all the required servers and grouping them, but you'll also need some technical data about those servers. You may find this information in a server monitoring diagram, where you will find the IP addresses, ports, and, with luck, a probing URL for health checks. In our case, the main VCL configuration file, default.vcl, is located at /etc/varnish and defines the configuration that Varnish Cache will use during the life cycle of a request, including the backend server list.

How to do it...

Open the default VCL file by using the following command:

# sudo vim /etc/varnish/default.vcl

A simple backend declaration would be:

backend server01 {
    .host = "localhost";
    .port = "8080";
}

This small block of code gives the name of the backend (server01), the hostname or IP, and which port to connect to. Save the file and reload the configuration using the following command:

# sudo service varnish reload

At this point, Varnish will proxy every request to the first declared backend using its default VCL file. Give it a try and access a known URL (like the index of your website) through Varnish Cache and make sure that the content is delivered as it would be without Varnish. For testing purposes, this is an acceptable backend declaration, but we need to make sure that our backend servers are up and waiting for requests before we really start to direct web traffic to them. Let's include a probing request to our backend:

backend website {
    .host = "localhost";
    .port = "8080";
    .probe = {
        .url = "/favicon.ico";
        .timeout = 60ms;
        .interval = 2s;
        .window = 5;
        .threshold = 3;
    }
}

Varnish will now probe the backend server using the provided URL with a timeout of 60 ms, every couple of seconds. To determine if a backend is healthy, it will analyze the last five probes.
If three of them result in 200 - OK, the backend is marked as Healthy and requests are forwarded to it; if not, the backend is marked as Sick and will not receive any incoming requests until it is Healthy again.

Probe the backend servers that require additional information: in case your backend server requires extra headers or uses HTTP basic authentication, you can change the probe from a URL to a Request and specify a raw HTTP request. When using the Request probe, you must always provide a Connection: close header or it will not work. This is shown in the following code snippet:

backend api {
    .host = "localhost";
    .port = "8080";
    .probe = {
        .request =
            "GET /status HTTP/1.1"
            "Host: www.yourhostname.com"
            "Connection: close"
            "X-API-Key: e4d909c290d0fb1ca068ffaddf22cbd0"
            "Accept: application/json";
        .timeout = 60ms;
        .interval = 2s;
        .window = 5;
        .threshold = 3;
    }
}

Choose a backend server based on incoming data: after declaring your backend servers, you can start directing the clients' requests. The most common way to choose which backend server will respond to a request is according to the incoming URL, as shown in the following code snippet:

sub vcl_recv {
    if (req.url ~ "/api/") {
        set req.backend = api;
    } else {
        set req.backend = website;
    }
}

Based on the preceding configuration, all requests that contain /api/ in the URL (www.yourdomain.com/api/) will be sent to the backend named api, and the others will reach the backend named website. You can also pick the correct backend server based on the User-Agent header, the client IP (geo-based), and pretty much any other information that comes with the request.

How it works...

By probing your backend servers, you can automate the removal of a sick backend from your cluster, and by doing so, you avoid delivering a broken page to your customer. As soon as your backend starts to behave normally, Varnish will add it back to the cluster pool.
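The window/threshold rule described above, where a backend is Healthy when at least .threshold of the last .window probes succeeded, can be modeled in a few lines of Python. This is a simplified sketch of the decision logic only, not Varnish's actual implementation:

```python
from collections import deque

class ProbeWindow:
    """Track the last `window` probe results; the backend counts as
    healthy when at least `threshold` of them returned HTTP 200."""
    def __init__(self, window=5, threshold=3):
        self.results = deque(maxlen=window)  # old results fall off automatically
        self.threshold = threshold

    def record(self, status_code):
        self.results.append(status_code == 200)

    def healthy(self):
        return sum(self.results) >= self.threshold

probe = ProbeWindow()
for status in (200, 500, 200, 500, 200):  # 3 of the last 5 probes OK
    probe.record(status)
print(probe.healthy())  # True
probe.record(503)       # window slides: now only 2 of the last 5 OK
print(probe.healthy())  # False
```

The deque with maxlen gives exactly the sliding-window behavior: recording a sixth probe silently discards the oldest one, so a single extra failure can flip the backend from Healthy to Sick, just as the recipe describes.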
Directing requests to the appropriate backend server is a great way to make sure that every request reaches its destination, and it gives you the flexibility to serve content based on the incoming data, such as a mobile device or an API request.

There's more...

If you have lots of servers to declare as backends, you can declare probes as separate configuration blocks and reference them later in the backend specifications, avoiding repetition and improving the code's readability:

probe favicon {
    .url = "/favicon.ico";
    .timeout = 60ms;
    .interval = 2s;
    .window = 5;
    .threshold = 3;
}

probe robots {
    .url = "/robots.txt";
    .timeout = 60ms;
    .interval = 2s;
    .window = 5;
    .threshold = 3;
}

backend server01 {
    .host = "localhost";
    .port = "8080";
    .probe = favicon;
}

backend server02 {
    .host = "localhost";
    .port = "8080";
    .probe = robots;
}

The server01 server will use the probe named favicon, and the server02 server will use the probe named robots.

Summary

This article explained how to connect to backend servers.

Resources for Article:

Further resources on this subject:

Security in Plone Sites [Article]
Professional Plone Development: Foreword by Alexander Limi [Article]
Microsoft SQL Server 2008 High Availability: Understanding Domains, Users, and Security [Article]


Networking in Tcl: Using UDP Sockets

Packt
29 Jun 2010
10 min read
(For more resources on Tcl, see here.)

TCP support is built in to the core of the Tcl interpreter. To be able to use the UDP protocol, you have to use an external package. The default choice is usually the TclUDP extension, which is available from http://sourceforge.net/projects/tcludp/ (it also comes as a part of the ActiveTcl bundle; if you don't have it, install it with teacup install udp). In contrast to TCP, which is a connection-oriented protocol, UDP is connection-less. This means that every data package (datagram) travels from one peer to another on its own, without a return acknowledgement or retransmission in the case of lost packets. What is more, one of the peers may send packages that are never received (for example, if the second peer is not listening at the moment), and there is no feedback that something is going wrong. This implies a difference in the design for handling the transmission, which will be illustrated in the following example.

Creating a UDP-based client

Let's consider a simple 'time server', where the server sends the current time to any client application that subscribes for such notifications, of course using UDP connectivity. The format of each datagram will be rather simple: it will contain only the current time expressed in seconds. First, let's have a look at the client code:

package require udp

set s [udp_open]
fconfigure $s -buffering none
fconfigure $s -remote [list 127.0.0.1 9876]
puts -nonewline $s "subscribe"

proc readTime {channel} {
    puts "Time from server: [read $channel]"
}

fileevent $s readable [list readTime $s]
vwait forever
close $s

As you have probably figured out, the first line loads the TclUDP extension. The next line creates a UDP socket, using the udp_open command, and stores its reference in the s variable. The UDP protocol uses ports in the same way as TCP. If we executed udp_open 1234, the port 1234 would be used; when the port is omitted, the operating system assigns a random one.
Note that if you specify a port that is already being used by another program, an error will be generated. Next, we set the buffering mode to none, meaning that the output buffer will be automatically flushed after every output operation. We will discuss buffering issues more deeply later in this example. The newly created UDP socket is not connected to anything, as UDP is connection-less. Such a socket is able to receive packets as they arrive at any time from any source, without establishing a data connection of any type. To have datagrams sent to a specific destination, you should use the fconfigure command with a new option (introduced by TclUDP), -remote, along with a two-item list containing the target address and port: fconfigure $s -remote [list 127.0.0.1 9876]. In this example, the server will be executed on the local host (so you are able to run it even if you are not part of a network). Note that you can call this command at any time, causing successive datagrams to be sent to different peers. Now it is time to send a message to the server: in this case, simply a string containing 'subscribe'. If -nonewline were omitted, puts would generate 2 datagrams (the second one containing the newline character), since the puts implementation likely writes to the buffer twice (the message, and then the newline character), and as the buffering is set to none, the buffer is flushed immediately after each write. The other solution would be to set buffering to full and call flush $s after each socket write. The handling of incoming data is implemented based on event programming. The line fileevent $s readable [list readTime $s] specifies that every time the socket has some data to read (is readable), the command readTime is called with $s as an argument. The command itself is simple: it prints to the screen every piece of data that comes from the socket, read with the read $s command.
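For comparison, the same connection-less pattern needs no extension package in Python: the standard socket module speaks UDP directly. The following sketch is a rough equivalent of the Tcl client plus a minimal receiving side in one process (an ephemeral port is used instead of 9876 so the example is self-contained):

```python
import socket

# A receiving UDP socket, bound like Tcl's [udp_open] with a port;
# port 0 asks the OS for an ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

# The client side: no connection is established; each sendto() is one datagram,
# and the (host, port) pair plays the role of fconfigure -remote.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"subscribe", addr)

data, peer = server.recvfrom(1024)  # like [read $channel] inside readTime
print(data.decode())                 # subscribe

client.close()
server.close()
```

As in Tcl, the receiving socket accepts datagrams from any peer, and recvfrom() reports who sent each one, which is exactly the information the time server's registerClient keeps per subscriber.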
Implementing service using UDP

The code for the server is a bit more complicated, due to the need to track subscribed clients:

package require udp

set clients [list]

proc registerClient {s} {
    global clients
    lappend clients [fconfigure $s -peer]
}

proc sendTime {s} {
    global clients
    foreach peer $clients {
        puts "sending to $peer"
        fconfigure $s -remote $peer
        puts -nonewline $s [clock seconds]
    }
    after 1000 [list sendTime $s]
}

set server [udp_open 9876]
fconfigure $server -buffering none
fileevent $server readable [list registerClient $server]
sendTime $server
vwait forever

The list named clients will hold an entry for each subscribed client; each entry is itself a list containing an IP address and port, so it suits the fconfigure $s -remote command perfectly. The server opens a UDP socket on port 9876. We avoid the word 'listens' in this context, as this socket does not differ in any way from the one used by the client. By contrast, TCP requires a special server-type socket for listening purposes. On every incoming data event, the registerClient procedure is executed. The command appends to the clients list information about the originator of the data (usually referred to as a peer) that has just arrived. This information is retrieved with fconfigure $s -peer. Although it may seem that this data is defined for the socket (represented by $s), in reality it refers to the most recent datagram received by this socket. Every second, the procedure sendTime is called. The purpose of this command is to send the current time to all subscribed clients, so it iterates over the clients list, and for each one it first configures the socket with the target address and port (fconfigure $s -remote $peer), and then sends a datagram containing the time in the form of the output from the clock seconds command. The server code is simple; it runs forever and there is no way to unsubscribe from receiving the data, but it demonstrates how to work with UDP in Tcl.
The following picture shows an example of the execution of the server (timeServer.tcl) and two clients (timeClient.tcl): the first client connects from port 4508, and the second one (started a few seconds later) from port 4509. The most important observation is that UDP sockets are handled identically on both the client and the server, so the name 'server' is purely conventional. It is worth mentioning that TclUDP supports multicasting and broadcasting of UDP packets. For details of how to perform this, please consult the package's manual.

Sending reliable messages

The UDP protocol lacks reliability, which is one of its main differences compared to TCP. Applications using UDP must either accept the fact that some of the datagrams may be lost, or implement equivalent functionality on their own. The same is true of topics like the order of incoming packets and data integrity. Such logic could be implemented as in the following example: the sender calculates the MD5 checksum of the data and sends both to the receiver. The receiver calculates the checksum again and compares it to the received one; in the case of equality, it sends an acknowledgment (in this example, the checksum is sent back). The sender will repeatedly attempt to send the data until the confirmation is received, or the permitted number of attempts has been reached.
The sender code is as follows:

package require udp
package require md5

set s [udp_open]
fconfigure $s -buffering none
fconfigure $s -remote [list 127.0.0.1 9876]

proc randomData {length} {
    set result ""
    for {set x 0} {$x < $length} {incr x} {
        set result "$result[expr { int(2 * rand()) }]"
    }
    return $result
}

proc sendPacket {chan contents {retryCount 3}} {
    variable ackArray
    if {$retryCount < 1} {
        puts "packet delivery failure"
        return
    }
    set md5 [md5::md5 -hex $contents]
    # if ack received, remove ack and do not send again
    if {[info exists ackArray($md5)]} {
        puts "packet successfully delivered"
        unset ackArray($md5)
        return
    }
    puts "sending packet, # of retries: $retryCount"
    puts "packet content: $md5$contents"
    puts -nonewline $chan "$md5$contents"
    flush $chan
    # handle retries
    incr retryCount -1
    after 1000 [list sendPacket $chan $contents $retryCount]
}

proc recvAckPacket {chan} {
    variable ackArray
    set md5 [read $chan]
    puts "received ack: $md5"
    set ackArray($md5) 1
}

sendPacket $s [randomData 48]
after 5000 [list sendPacket $s [randomData 48]]
after 10000 [list sendPacket $s [randomData 48]]

fileevent $s readable [list recvAckPacket $s]
vwait forever

The main logic is located in the sendPacket procedure. The last parameter is the number of delivery attempts left. The procedure calculates the MD5 checksum of the data to be sent (stored in the contents variable) and first checks whether the appropriate acknowledgment has already been received: if the ackArray array contains an entry for the checksum (which doubles as the acknowledgment), the entry is removed and the datagram is considered delivered. Otherwise, the checksum along with the data is sent to the receiver, and sendPacket is scheduled to run again after one second, each time with the retry counter decreased. If the procedure is called when the counter has reached zero, the delivery is considered failed.
The acknowledgments are received by the recvAckPacket procedure, which simply stores them in ackArray, allowing sendPacket to find them and react appropriately. The helper procedure randomData generates a random string of zeroes and ones of a given length. Note that this example does not cover the topic of received-packet ordering.

The receiver code:

package require udp
package require md5

set server [udp_open 9876]
fconfigure $server -buffering none
fileevent $server readable [list recvPacket $server]

proc recvPacket {chan} {
    variable readPackets
    set data [read $chan]
    puts "received: $data"
    set md5 [string range $data 0 31]
    set contents [string range $data 32 end]
    if {$md5 != [md5::md5 -hex $contents]} {
        # the data are malformed
        puts "malformed data"
        return
    }
    # send an ack anyway, because the original
    # might not have been received by the other peer
    fconfigure $chan -remote [fconfigure $chan -peer]
    # simulate the ack packet being lost over the network
    if {10 * rand() > 7} {
        puts -nonewline $chan $md5
        flush $chan
    }
    # check if this packet is not a duplicate
    if {[info exists readPackets($md5)]} {
        return
    }
    set readPackets($md5) [clock seconds]
    # handle packet here...
}

proc periodicCleanup {} {
    variable readPackets
    set limit [clock scan "-300 seconds"]
    foreach {md5 clock} [array get readPackets] {
        if {$clock < $limit} {
            unset readPackets($md5)
        }
    }
    after 60000 periodicCleanup
}

periodicCleanup
vwait forever

The receiver sends back the acknowledgement each time a correct datagram is received, that is, when the checksum sent (the first 32 characters) and the one calculated locally are equal. It also stores in the readPackets array the arrival time of each packet, which allows us to detect duplicated data and process it only once. To make the example more vivid, about 70% data loss is simulated by randomly not sending confirmations. The receiver also implements some simple logic to periodically clean up the log of received datagrams, to prevent it from growing too large and consuming too much memory.
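The checksum framing used here, a 32-character hex MD5 prefix followed by the payload, is easy to model in Python's hashlib. The sketch below covers only the framing and verification, not the retry/acknowledgment machinery:

```python
import hashlib

def frame(payload):
    """Prefix the payload with its 32-char hex MD5, as the Tcl sender does."""
    return hashlib.md5(payload).hexdigest().encode() + payload

def unframe(datagram):
    """Split a datagram into checksum and payload and verify the checksum.
    Returns (ok, payload)."""
    md5, payload = datagram[:32], datagram[32:]
    ok = hashlib.md5(payload).hexdigest().encode() == md5
    return ok, payload

good = frame(b"010011010100")
print(unframe(good)[0])                          # True
print(unframe(b"0" * 32 + b"010011010100")[0])   # False: checksum mismatch
```

Because MD5 hex digests are always exactly 32 characters, the receiver can split the datagram at a fixed offset, which is precisely what the Tcl code's string range $data 0 31 / 32 end pair does.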
The result of running the example can be as depicted: In this example, the first datagram was delivered successfully on the first attempt, the second one's delivery failed despite 3 attempts, and the last one was delivered on the second try. Summary In this article we saw how to handle the UDP communication in Tcl, with TclUDP extension as the implementation. Further resources on this subject: Tcl: Handling Email [article] Extending Applications in Tcl [article] Managing Certificates from Tcl [article]