
How-To Tutorials - Networking

109 Articles

Using IPv6 on Packet Tracer

Packt
13 Jan 2014
6 min read
This article is written by Jesin A, the author of Packet Tracer Network Simulator. Cisco Packet Tracer is a powerful network simulation program that provides simulation, visualization, authoring, assessment, and collaboration capabilities. This article explains how IPv6 addresses are used in Packet Tracer.

IPv4 has 4.3 billion addresses, which may seem mind-boggling; however, it took only two decades for the pool to reach depletion. IPv6 has come to the rescue in the form of 128-bit addresses. Packet Tracer supports a wide array of IPv6 features. We'll start by learning how to assign IP addresses to different devices and how to configure routing between them. Finally, we'll create a setup that enables IPv6 communication over IPv4 devices.

Assigning IPv6 addresses

Starting from Packet Tracer Version 6, the IP Configuration utility under the Desktop tab of end devices has an option to enter an IPv6 address. Let's begin with a simple topology consisting of two PCs and a router connected to a switch, as shown in the following screenshot:

There are three ways of assigning IPv6 addresses to a device, and we'll see each one of them.

Autoconfiguration

Autoconfiguration requires the least amount of configuration but makes it difficult to remember the IPv6 addresses. This method uses the MAC address of the device to create an IPv6 address with the FE80:: prefix. Carry out the following steps to assign IPv6 addresses using autoconfiguration:

Begin by configuring the router. Enter the interface configuration mode and enable IPv6 on the interface.

R0(config)#ipv6 unicast-routing
R0(config)#interface FastEthernet0/0
R0(config-if)#ipv6 enable

Next, we will configure a link-local address and a global unicast address on this interface. We'll use eui-64 to reduce the configuration.

R0(config-if)#ipv6 address autoconfig
R0(config-if)#ipv6 add 2000::/64 eui-64
R0(config-if)#no shutdown

Verify that the interface is up and has two IPv6 addresses.

R0>sh ipv6 interface brief
FastEthernet0/0 [up/up]
FE80::2D0:58FF:FE65:E701
2000::2D0:58FF:FE65:E701

These IPv6 addresses may vary when you try them out, as they are based on the MAC address.

Enable routing so that this router can be identified as a default gateway.

R0(config)#ipv6 unicast-routing

The configuration of the router is now done; let's move on to the PCs. Go to the Desktop tab of the PC, open IP Configuration, and under the IPv6 Configuration section, choose Auto Config. The gateway and the PC's IP address will be assigned automatically, as shown in the following screenshot:

Use the simple PDU tool to test the connectivity; you'll see ICMPv6 packets moving between the nodes. To view the IPv6 address from the command line of PCs, use the ipv6config command.

Static IPv6

IPv6 addresses can also be assigned statically on all devices. We'll use the same topology for this section too. Carry out the following steps to configure IPv6 addresses statically:

Begin by configuring a static IPv6 address on the router.

R0(config)#interface fastethernet0/0
R0(config-if)#ipv6 enable
R0(config-if)#ipv6 address 2000::1/64
R0(config-if)#no shutdown

Go to the Desktop tab of the PC, open the IP Configuration utility, and enter an IPv6 address with the same prefix. Now use the simple PDU tool to test the connectivity.
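Before moving on, it's worth seeing how the autoconfigured addresses above are derived. The modified EUI-64 procedure splits the interface MAC address in half, inserts FFFE in the middle, and flips the universal/local bit of the first octet. The following minimal Python sketch (the eui64_address helper is ours, not an IOS or Packet Tracer feature) reproduces the derivation for the MAC behind the sample output shown earlier, using only the standard library.

```python
import ipaddress

def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive a modified EUI-64 IPv6 address from a /64 prefix and a MAC address."""
    # Accepts 00:d0:58:65:e7:01, 00d0.5865.e701, or 00-d0-58-65-e7-01 notation
    hex_digits = mac.replace(":", "").replace(".", "").replace("-", "").lower()
    octets = [int(hex_digits[i:i + 2], 16) for i in range(0, 12, 2)]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert FFFE in the middle
    interface_id = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Network(prefix)[interface_id]  # prefix + interface identifier

# MAC taken from the example output above (FE80::2D0:58FF:FE65:E701)
mac = "00d0.5865.e701"
print(eui64_address("fe80::/64", mac))   # fe80::2d0:58ff:fe65:e701
print(eui64_address("2000::/64", mac))   # 2000::2d0:58ff:fe65:e701
```

Running it against your own PC or router MACs lets you predict the addresses a device will pick before you open its configuration screen.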
Once both methods work fine, you can have a look at the IPv6 neighbors table. This is similar to the ARP table of IPv4.

R0#sh ipv6 neighbor
IPv6 Address        Age  Link-layer Addr  State  Interface
2000::2             0    00E0.A39E.05C4   REACH  Fa0/0
2000::3             0    0001.43B9.0268   REACH  Fa0/0

Now that we have configured IPv6 addresses on a single network, let's configure them on more networks and enable routing between them.

IPv6 static and dynamic routing

Similar to IPv4, IPv6 too supports both static and dynamic routing. The configuration commands for its static routing are similar to IPv4.

Static routing

Modifying the same topology that we used previously, let's add a router, a switch, and two PCs to create a separate network, as shown in the following screenshot:

The first network will use addresses starting from 2000:1::/64 and the second network will use addresses starting from 2000:2::/64. The link between both routers will have the IP addresses 2001::10/64 and 2001::20/64. Here is a table describing the topology:

Device  Interface        IP address
R1      FastEthernet0/0  2000:1::1/64
        FastEthernet0/1  2001::10/64
PC0     FastEthernet     2000:1::2/64
PC1     FastEthernet     2000:1::3/64
R2      FastEthernet0/0  2000:2::1/64
        FastEthernet0/1  2001::20/64
PC2     FastEthernet     2000:2::2/64
PC3     FastEthernet     2000:2::3/64

After the necessary IP addresses and gateways have been assigned, open the CLI tab for the R1 router and start configuring routing with the following commands:

R1(config)#ipv6 unicast-routing
R1(config)#ipv6 route 2000:2::/64 2001::20

Next, open the CLI tab for R2 and configure routing on it.

R2(config)#ipv6 unicast-routing
R2(config)#ipv6 route 2000:1::/64 2001::10

Now use the simple PDU tool to test the connectivity. You may also use the tracert command on a PC to see the path a packet takes.

PC>tracert 2000:2::3
Tracing route to 2000:2::3 over a maximum of 30 hops:
1   63 ms   63 ms   47 ms   2000:1::1
2   94 ms   78 ms   94 ms   2001::20
3   156 ms  109 ms  129 ms  2000:2::3
Trace complete.

Dynamic routing

Packet Tracer offers the same dynamic routing protocols for IPv6: RIPv6, EIGRP, and OSPF. We'll be configuring RIPv6 in this section. Note that RIPv6 does not represent RIP Version 6; it is RIP for IPv6 addresses. For this exercise, we'll use the topology shown in the following screenshot:

The additional IP assignment details alone are shown in the following table:

Device  Interface        IPv6 Address
R2      FastEthernet1/0  2001:1::10/64
R3      FastEthernet0/0  2000:3::1/64
        FastEthernet0/1  2001:1::20/64
PC2     FastEthernet     2000:3::2/64

We'll see how to configure RIP on one router, and you can do the same on the others.

R1(config)#interface FastEthernet0/0
R1(config-if)#ipv6 address 2000:1::1/64
R1(config-if)#ipv6 rip Net1 enable
R1(config-if)#ipv6 enable
R1(config-if)#interface FastEthernet0/1
R1(config-if)#ipv6 address 2001::10/64
R1(config-if)#ipv6 rip Net1 enable
R1(config-if)#ipv6 enable

Note that the ipv6 rip command is used to enable RIP on a particular interface. Entering ipv6 rip Net1 enable on the first interface begins the RIPv6 process. The Net1 string can be any name used to identify the RIP process. Once configured, use the usual diagnostic tools (ping or the simple PDU) to check the connectivity.
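Both routers pick routes by longest-prefix matching, so it can help to check on paper which entry a destination will hit before firing the simple PDU tool. The sketch below is only an illustration using Python's standard ipaddress module; the route list mirrors R1's connected and static routes from the table above and is typed in by hand, not pulled from the device.

```python
import ipaddress

# R1's relevant routes from the topology above (connected + static)
r1_routes = [
    ("2000:1::/64", "directly connected, Fa0/0"),
    ("2001::/64",   "directly connected, Fa0/1"),
    ("2000:2::/64", "via 2001::20"),   # static route configured on R1
]

def lookup(destination: str):
    """Return the best (longest-prefix) matching route for a destination, or None."""
    dest = ipaddress.IPv6Address(destination)
    matches = [(ipaddress.IPv6Network(prefix), next_hop)
               for prefix, next_hop in r1_routes
               if dest in ipaddress.IPv6Network(prefix)]
    return max(matches, key=lambda m: m[0].prefixlen) if matches else None

for addr in ("2000:1::3", "2000:2::3", "2001::20"):
    print(addr, "->", lookup(addr))
```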
To view the RIP database, use the following command:

R1#sh ipv6 rip database
RIP process "Net1" local RIB
2000:2::/64, metric 2, installed
FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec
2000:3::/64, metric 3, installed
FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec
2001::/64, metric 2
FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec
2001:1::/64, metric 2, installed
FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec
RIP process "LINK" local RIB

Trace the route of the packet to see the path it takes.

PC>tracert 2000:3::2
Tracing route to 2000:3::2 over a maximum of 30 hops:
1   31 ms   32 ms   31 ms   2000:1::1
2   50 ms   50 ms   63 ms   2001::20
3   94 ms   94 ms   94 ms   2001:1::20
4   125 ms  109 ms  125 ms  2000:3::2
Trace complete.

Summary

In this article, we learned how to use IPv6 with Packet Tracer. We saw the limitation of the IPv4 addresses. We also learned how to assign IPv6 addresses and how to configure IPv6 static and dynamic routing.

Resources for Article:

How to edit the attributes in QGIS
Troubleshooting OpenStack Compute problems
Creating Identity and Resource Pools in Cisco Unified Computing System
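As a small follow-up to the RIP database listing shown above: if you want to compare databases from several routers without eyeballing them, output in that shape can be parsed into records. The sketch below is only a starting point; the regular expressions assume the exact layout shown in this article, and real IOS or newer Packet Tracer versions may format the output slightly differently.

```python
import re

sample = """RIP process "Net1" local RIB
2000:2::/64, metric 2, installed
FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec
2000:3::/64, metric 3, installed
FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec
"""

route_re = re.compile(r"^(?P<prefix>[0-9A-Fa-f:]+/\d+), metric (?P<metric>\d+)")
nexthop_re = re.compile(r"^(?P<intf>\S+?)/(?P<nexthop>FE80:[0-9A-Fa-f:]+), expires in (?P<ttl>\d+) sec")

routes, current = [], None
for line in sample.splitlines():
    m = route_re.match(line.strip())
    if m:
        current = {"prefix": m["prefix"], "metric": int(m["metric"])}
        routes.append(current)
        continue
    m = nexthop_re.match(line.strip())
    if m and current is not None:
        current.update(interface=m["intf"], nexthop=m["nexthop"], expires=int(m["ttl"]))

for route in routes:
    print(route)
```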


Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]

Vijin Boricha
29 Jul 2018
10 min read
One of the contributing factors in the evolution of digital marketing and business is email. Email allows users to exchange real-time messages and other digital information such as files and images over the internet in an efficient manner. Each user is required to have a human-readable email address in the form of username@domainname.com. There are various email providers available on the internet, and any user can register to get a free email address. There are different email application-layer protocols available for sending and receiving mails, and the combination of these protocols helps with end-to-end email exchange between users in the same or different mail domains. In this article, we will look at the normal operation of email protocols and how to use Wireshark for basic analysis and troubleshooting. This article is an excerpt from Network Analysis using Wireshark 2 Cookbook - Second Edition written by Nagendra Kumar Nainar, Yogesh Ramdoss, Yoram Orzach. The three most commonly used application layer protocols are POP3, IMAP, and SMTP: POP3: Post Office Protocol 3 (POP3) is an application layer protocol used by email systems to retrieve mail from email servers. The email client uses POP3 commands such as LOGIN, LIST, RETR, DELE, QUIT to access and manipulate (retrieve or delete) the email from the server. POP3 uses TCP port 110 and wipes the mail from the server once it is downloaded to the local client. IMAP: Internet Mail Access Protocol (IMAP) is another application layer protocol used to retrieve mail from the email server. Unlike POP3, IMAP allows the user to read and access the mail concurrently from more than one client device. With current trends, it is very common to see users with more than one device to access emails (laptop, smartphone, and so on), and the use of IMAP allows the user to access mail any time, from any device. The current version of IMAP is 4 and it uses TCP port 143. SMTP: Simple Mail Transfer Protocol (SMTP) is an application layer protocol that is used to send email from the client to the mail server. When the sender and receiver are in different email domains, SMTP helps to exchange the mail between servers in different domains. It uses TCP port 25: As shown in the preceding diagram, SMTP is the email client used to send the mail to the mail server, and POP3 or IMAP is used to retrieve the email from the server. The email server uses SMTP to exchange the mail between different domains. In order to maintain the privacy of end users, most email servers use different encryption mechanisms at the transport layer. The transport layer port number will differ from the traditional email protocols if they are used over secured transport layer (TLS). For example, POP3 over TLS uses TCP port 995, IMAP4 over TLS uses TCP port 993, and SMTP over TLS uses port 465. Normal operation of mail protocols As we saw above, the common mail protocols for mail client to server and server to server communication are POP3, SMTP, and IMAP4. Another common method for accessing emails is web access to mail, where you have common mail servers such as Gmail, Yahoo!, and Hotmail. Examples include Outlook Web Access (OWA) and RPC over HTTPS for the Outlook web client from Microsoft. In this recipe, we will talk about the most common client-server and server-server protocols, POP3 and SMTP, and the normal operation of each protocol. Getting ready Port mirroring to capture the packets can be done either on the email client side or on the server side. How to do it... 
POP3 is usually used for client to server communications, while SMTP is usually used for server to server communications. POP3 communications POP3 is usually used for mail client to mail server communications. The normal operation of POP3 is as follows: Open the email client and enter the username and password for login access. Use POP as a display filter to list all the POP packets. It should be noted that this display filter will only list packets that use TCP port 110. If TLS is used, the filter will not list the POP packets. We may need to use tcp.port == 995 to list the POP3 packets over TLS. Check the authentication has been passed correctly. In the following screenshot, you can see a session opened with a username that starts with doronn@ (all IDs were deleted) and a password that starts with u6F. To see the TCP stream shown in the following screenshot, right-click on one of the packets in the stream and choose Follow TCP Stream from the drop-down menu: Any error messages in the authentication stage will prevent communications from being established. You can see an example of this in the following screenshot, where user authentication failed. In this case, we see that when the client gets a Logon failure, it closes the TCP connection: Use relevant display filters to list the specific packet. For example, pop.request.command == "USER" will list the POP request packet with the username and pop.request.command == "PASS" will list the POP packet carrying the password. A sample snapshot is as follows: During the mail transfer, be aware that mail clients can easily fill a narrow-band communications line. You can check this by simply configuring the I/O graphs with a filter on POP. Always check for common TCP indications: retransmissions, zero-window, window-full, and others. They can indicate a busy communication line, slow server, and other problems coming from the communication lines or end nodes and servers. These problems will mostly cause slow connectivity. When the POP3 protocol uses TLS for encryption, the payload details are not visible. We explain how the SSL captures can be decrypted in the There's more... section. IMAP communications IMAP is similar to POP3 in that it is used to retrieve the mail from the server by the client. The normal behavior of IMAP communication is as follows: Open the email client and enter the username and password for the relevant account. Compose a new message and send it from any email account. Retrieve the email on the client that is using IMAP. Different clients may have different ways of retrieving the email. Use the relevant button to trigger it. Check you received the email on your local client. SMTP communications SMTP is commonly used for the following purposes: Server to server communications, in which SMTP is the mail protocol that runs between the servers In some clients, POP3 or IMAP4 are configured for incoming messages (messages from the server to the client), while SMTP is configured for outgoing messages (messages from the client to the server) The normal behavior of SMTP communication is as follows: The local email client resolves the IP address of the configured SMTP server address. This triggers a TCP connection to port number 25 if SSL/TLS is not enabled. If SSL/TLS is enabled, a TCP connection is established over port 465. It exchanges SMTP messages to authenticate with the server. The client sends AUTH LOGIN to trigger the login authentication. Upon successful login, the client will be able to send mails. 
It sends SMTP messages such as "MAIL FROM:<>" and "RCPT TO:<>" carrying the sender and receiver email addresses. Upon successful queuing, we get an OK response from the SMTP server. The following is a sample SMTP message flow between client and server:

How it works...

In this section, let's look into the normal operation of the different email protocols with the use of Wireshark. Mail clients will mostly use POP3 for communication with the server. In some cases, they will use SMTP as well. IMAP4 is used when server manipulation is required, for example, when you need to see messages that exist on a remote server without downloading them to the client. Server to server communication is usually implemented by SMTP.

The difference between IMAP and POP is that in IMAP, the mail is always stored on the server. If you delete it, it will be unavailable from any other machine. In POP, deleting a downloaded email may or may not delete that email on the server. In general, SMTP status codes are divided into three categories, which are structured in a way that helps you understand what exactly went wrong. The methods and details of SMTP status codes are discussed in the following section.

POP3

POP3 is an application layer protocol used by mail clients to retrieve email messages from the server. A typical POP3 session will look like the following screenshot. It has the following steps:

1. The client opens a TCP connection to the server.
2. The server sends an OK message to the client (OK Messaging Multiplexor).
3. The user sends the username and password.
4. The protocol operations begin. NOOP (no operation) is a message sent to keep the connection open, and STAT (status) is sent from the client to the server to query the message status. The server answers with the number of messages and their total size (in packet 1042, OK 0 0 means there are no messages and the total size is zero).
5. When there are no mail messages on the server, the client sends a QUIT message (packet 1048), the server confirms it (packet 1136), and the TCP connection is closed (packets 1137, 1138, and 1227).

In an encrypted connection, the process will look nearly the same (see the following screenshot). After the establishment of a connection (1), there are several POP messages (2), then the TLS connection establishment (3), and then the encrypted application data:
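If you save such a POP3 session from Wireshark as a pcap file, you can also extract the cleartext commands outside Wireshark. Below is a rough Scapy-based sketch; Scapy is a third-party library, the capture file name session.pcap is just a placeholder, and the script only looks at unencrypted TCP port 110 traffic, so it will print nothing for POP3 over TLS.

```python
from scapy.all import rdpcap, TCP, Raw   # pip install scapy

POP3_PORT = 110
packets = rdpcap("session.pcap")         # placeholder capture file

for pkt in packets:
    if not (pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        continue
    tcp = pkt[TCP]
    if POP3_PORT not in (tcp.sport, tcp.dport):
        continue
    direction = "client->server" if tcp.dport == POP3_PORT else "server->client"
    payload = pkt[Raw].load.decode("ascii", errors="replace").rstrip()
    # Prints commands (USER, PASS, STAT, QUIT, ...) and server replies (+OK / -ERR)
    for line in payload.splitlines():
        print(f"{direction}: {line}")
```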
IMAP

The normal operation of IMAP is as follows: The email client resolves the IP address of the IMAP server. As shown in the preceding screenshot, the client establishes a TCP connection to port 143 when SSL/TLS is disabled. When SSL is enabled, the TCP session will be established over port 993. Once the session is established, the client sends an IMAP capability message asking the server for the capabilities it supports. This is followed by authentication for access to the server. When the authentication is successful, the server replies with response code 3 stating that the login was a success. The client now sends the IMAP FETCH command to fetch any mails from the server. When the client is closed, it sends a logout message and clears the TCP session.

SMTP

The normal operation of SMTP is as follows: The email client resolves the IP address of the SMTP server. The client opens a TCP connection to the SMTP server on port 25 when SSL/TLS is not enabled. If SSL is enabled, the client will open the session on port 465. Upon successful TCP session establishment, the client will send an AUTH LOGIN message to prompt for the account username/password. The username and password will be sent to the SMTP server for account verification. SMTP will send a response code of 235 if authentication is successful.

The client now sends the sender's email address to the SMTP server. The SMTP server responds with a response code of 250 if the sender's address is valid. Upon receiving an OK response from the server, the client will send the receiver's address. The SMTP server will respond with a response code of 250 if the receiver's address is valid. The client will now push the actual email message. SMTP will respond with a response code of 250 and the response parameter OK: queued. A successfully queued message ensures that the mail has been sent and queued for delivery to the receiver's address.

We have learned how to analyze issues and malicious emails in POP, IMAP, and SMTP. Get to know more about DNS protocol analysis and FTP, HTTP/1, and HTTP/2 from our book Network Analysis using Wireshark 2 Cookbook - Second Edition.

What's new in Wireshark 2.6?
Analyzing enterprise application behavior with Wireshark 2
Capturing Wireshark Packets
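To generate SMTP traffic like that described in the section above for your own captures, the Python standard library's smtplib is enough. The server name, port, and credentials below are placeholders for a lab mail server you control; with Wireshark running you should see the AUTH, MAIL FROM, RCPT TO, and DATA exchanges and the 235/250 response codes discussed above (or only a TLS handshake if you switch to port 465).

```python
import smtplib
from email.message import EmailMessage

SMTP_HOST = "mail.example.lab"            # placeholder lab server
SMTP_PORT = 25                            # 465 would be SMTP over TLS (SMTPS)
USERNAME, PASSWORD = "alice", "secret"    # placeholder credentials

msg = EmailMessage()
msg["From"] = "alice@example.lab"
msg["To"] = "bob@example.lab"
msg["Subject"] = "Wireshark SMTP test"
msg.set_content("Test message used to observe SMTP commands on the wire.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10) as server:
    server.set_debuglevel(1)              # echo the SMTP dialogue and response codes
    server.ehlo()
    server.login(USERNAME, PASSWORD)      # triggers AUTH; expect a 235 on success
    server.send_message(msg)              # MAIL FROM / RCPT TO / DATA; expect 250 queued
```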


Getting Started with Fortigate: Troubleshooting

Packt
20 Nov 2013
6 min read
Base system diagnostics The status screen in the web-based manager includes a high level overview of information such as the system time (that is important, for example, to have coherent error messages and log recording), CPU and memory usage, license information, and alerts, as we can see in the following screenshot: Although this screen is useful for a rapid assessment of the situation, our diagnostic tools usually have to dig deeper. The first base command we will use in the CLI is get system. This command can open more than eighty information options, dedicated to the different features of the FortiGate units. Among the others, we are able to check counters related to performance, such as: Startup configuration errors with the get system startup-error-log command. Firewall traffic statistics related to the traffic with the get system performance firewall statistics command. Firewall packet distribution statistics with the get system performance firewall packet-distribution command. Information about the most intensive CPU processes with the get system performance top, that will show a screen divided in columns, as we can see in the following screenshot: Another fundamental command we will use is diagnose hardware, which is used for problem-solving procedures related to certificates, devices, PCI, and system information. The devices menu is opened with the diagnose hardware deviceinfo, and includes a disk option to recover information about internal disks (if present) and a nic option to display data from network interfaces. The latter also shows on screen the errors and the drops related to network packets, as we can see in the following screenshot: To have access to real-time information, we will use the diagnose debug command. The diagnose debug report is not a troubleshooting tool, but is used to create a report for the Fortinet technical support. We will talk about additional options for the diagnose debug command later, in relation to TCP/IP debugging. Troubleshooting routing The tools that we will see in the following paragraphs will be required to troubleshoot the addressing and routing features of the TCP/IP protocol. Before we proceed to explain the single tools and commands for troubleshooting, we can take advantage of a real-world suggestion. In order to perform the troubleshooting steps in a more comfortable way, it is often advisable to use a client for SSH and Telnet such as PuTTY (http://bit.ly/1kyS98), to launch two separate sessions on a FortiGate unit. One of the two consoles will be dedicated to watch the results of the debug commands. The second console will be dedicated to launch commands, such as ping and traceroute that we will use to trigger actions that will be visible in the first open console. In the following screenshot we have a diagnose sniffer packet port1 icmp command running on the session opened to the left-hand side and an execute ping command on the session opened on the right-hand side window: Layer 2 and layer 3 TCP/IP diagnostics Some issues can be solved only by correcting the ARP table that associates IP and MAC addresses. The diagnose ip arp list command shows the ARP cache as shown in the following screenshot: The following commands are used to manage the ARP cache: The execute clear system arp table command to remove the ARP cache. The diagnose ip arp delete <interface name> <IP address> command to remove a single ARP entry. The diagnose ip arp flush <interface name> command to remove all entries associated with a single interface. 
The config system arp-table command to add a static ARP entry. This command requires two further commands: The config system arp-table command The edit command to create a new entry and to modify an existing entry or to create a new one Three mandatory parameters are: set mac, to configure a MAC address for the entry set ip, to configure an IP address for the entry set interface, to select the interface that is connected to the MAC and IP In the following screenshot we can see all the required steps to add the entry number 3 on our ARP cache with the following parameters: ip 192.168.12.1 with a mac F0:DE:F1:E4:75:B9 on the internal interface: We can now take care of layer 3, especially from the point of view of routing. As in any device that manages networking, the most used command (included in the ICMP protocol) is the ping command. A FortiGate unit supports two kinds of ping commands: execute ping <IP address> and a command dedicated to modify the behavior of the ping command, execute ping-options, that includes parameters such as: data-size: To select the datagram size in bytes (between 0 and 65507) interval: To set a value in seconds between two pings repeat-count: To select the number of pings to send source: To specify a source interface (default value is auto-select) view-settings: Used to show the current ping options timeout: To specify time out in seconds In the following screenshot we have modified some ping parameters and verified them with the view-settings parameter: Another fundamental command, based on ICMP is execute traceroute <dest>, that allows us to see all the hops (networking devices) that a network packet traverses, starting from the FortiGate to a destination (which can be an IP address or an FQDN). Having the full path shown can be important to detect a wrong or faulty hop along the path. The usefulness of traceroute is related to how many devices along the route allow the use of the ICMP protocol, but also if we use it only inside to troubleshoot our internal corporate network, the results of this simple command are extremely useful. To show the result of a traceroute and have fun along the way, we can use the so called "Star Wars Traceroute"; execute traceroute 216.81.59.173, that will show the opening crawl to Star Wars Episode IV (a result that was obtained making clever use of hostnames and routing). We can see a (small) part of the result in the following screenshot: The next logical step to debug problems at layer 3 of TCP/IP is to verify the routing table, something that we are able to do with the get router info routing-table all command. The resulting information text could be very lengthy, so we are able to filter the output using the parameters including: details: Show routing table details information rip: Show RIP routing table ospf: Show OSPF routing table isis: Show ISIS routing table static: Show static routing table connected: Show connected routing table database: Show routing information base The routing table shows the routing entries and their origin (the routing protocol that added an entry in the routing table). Summary In this article, the authors have made the understanding of the Base system diagnostics, the troubleshooting of routing, and layer 2 and layer 3 TCP/IP diagnostics better. Useful Links: vCloud Networks Network Virtualization and vSphere Supporting hypervisors by OpenNebula
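All of the get and diagnose commands covered in this article can also be collected from a script rather than an interactive SSH session, which makes it easier to keep a history of a unit's state. The following is a rough sketch using the third-party Netmiko library; the management IP, credentials, and even the availability of its fortinet device type are assumptions to verify against your own environment and Netmiko version, and the command list simply reuses commands shown above.

```python
from netmiko import ConnectHandler   # pip install netmiko

fortigate = {
    "device_type": "fortinet",       # assumes Netmiko's Fortinet driver
    "host": "192.0.2.10",            # placeholder management IP
    "username": "admin",
    "password": "changeme",
}

commands = [
    "get system performance firewall statistics",
    "get system performance firewall packet-distribution",
    "diagnose ip arp list",
    "get router info routing-table all",
]

with ConnectHandler(**fortigate) as conn:
    for cmd in commands:
        print(f"===== {cmd} =====")
        print(conn.send_command(cmd))
```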


Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range

Savia Lobo
15 Jul 2019
6 min read
Last month, the Linux kernel team announced a patch that allows 0.0.0.0/8 as a valid address range. This patch allows these 16 million new IPv4 addresses to appear within a box or on the wire. The aim is to use 0/8 as global unicast, since this range was never used except for 0.0.0.0 itself.

In a post written by Dave Taht, Director of the Make-Wifi-Fast project, and committed by David Stephen Miller, an American software developer working on the Linux kernel, it is explained that the use of 0.0.0.0/8 has been prohibited since the early internet due to two issues.

First, an interoperability problem with BSD 4.2 in 1984, which was fixed in BSD 4.3 in 1986. "BSD 4.2 has long since been retired", the post mentions.

The second issue is that addresses of the form 0.x.y.z were initially defined only as a source address in an ICMP datagram, indicating "node number x.y.z on this IPv4 network", by nodes that know their address on their local network but do not yet know their network prefix, in RFC0792 (page 19). The use of 0.x.y.z was later repealed in RFC1122 because the original ICMP-based mechanism for learning the network prefix was unworkable on many networks such as Ethernet, since these networks have longer addresses that would not fit into the 24 "node number" bits. Modern networks use reverse ARP (RFC0903), BOOTP (RFC0951), or DHCP (RFC2131) to find their full 32-bit address and CIDR netmask (and other parameters such as default gateways). As a result, the 16,777,215 addresses in the 0.0.0.0/8 space have been left unused and reserved for future use since 1989.

The discussion about allowing these IP addresses and making them available started early this year at NetDevConf 2019, the technical conference on Linux networking, which took place in Prague, Czech Republic, from March 20th to 22nd, 2019. One of the sessions, "Potential IPv4 Unicast Expansions", conducted by Dave Taht along with John Gilmore and Paul Wouters, explains how IPv4's success story was in carrying unicast packets worldwide. The speakers note that service sites still need IPv4 addresses for everything, since the majority of Internet client nodes don't yet have IPv6 addresses. IPv4 addresses now cost 15 to 20 dollars apiece (times the size of your network!) and the price is rising.

In their keynote, they described how the IPv4 address space includes hundreds of millions of addresses reserved for obscure reasons (the ranges 0/8 and 127/16), for obsolete reasons (225/8-231/8), or for "future use" (240/4, otherwise known as class E). They highlighted the fact: "instead of leaving these IP addresses unused, we have started an effort to make them usable, generally. This work stalled out 10 years ago, because IPv6 was going to be universally deployed by now, and reliance on IPv4 was expected to be much lower than it in fact still is".

"We have been reporting bugs and sending patches to various vendors. For Linux, we have patches accepted in the kernel and patches pending for the distributions, routing daemons, and userland tools. Slowly but surely, we are decontaminating these IP addresses so they can be used in the near future. Many routers already handle many of these addresses, or can easily be configured to do so, and so we are working to expand unicast treatment of these addresses in routers and other OSes", they further mentioned.
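The address arithmetic in the post is easy to double-check with Python's standard ipaddress module. The short sketch below reproduces the figures quoted above (16,777,216 addresses in 0.0.0.0/8, of which 16,777,215 were unusable beyond 0.0.0.0 itself) and, for comparison, the sizes of the other ranges mentioned in the keynote.

```python
import ipaddress

named_ranges = {
    "0.0.0.0/8 ('this network')": "0.0.0.0/8",
    "127.0.0.0/8 (loopback)": "127.0.0.0/8",
    "240.0.0.0/4 (class E, 'future use')": "240.0.0.0/4",
}

for label, cidr in named_ranges.items():
    print(f"{label}: {ipaddress.ip_network(cidr).num_addresses:,} addresses")

# 225/8 through 231/8, the "obsolete" multicast blocks mentioned in the keynote
obsolete = sum(ipaddress.ip_network(f"{n}.0.0.0/8").num_addresses for n in range(225, 232))
print(f"225/8-231/8 combined: {obsolete:,} addresses")

zero_net = ipaddress.ip_network("0.0.0.0/8")
print("0.0.0.0/8 minus the all-zeros address:", f"{zero_net.num_addresses - 1:,}")  # 16,777,215
```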
They said they wanted to carry out an “authorized experiment to route some of these addresses globally, monitor their reachability from different parts of the Internet, and talk to ISPs who are not yet treating them as unicast to update their networks”. Here’s the patch code for 0.0.0.0/8 for Linux: Users have a mixed reaction to this announcement and assumed that these addresses would be unassigned forever. A few are of the opinion that for most business, IPv6 is an unnecessary headache. A user explained the difference between the address ranges in a reply to Jeremy Stretch’s (a network engineer) post, “0.0.0.0/8 - Addresses in this block refer to source hosts on "this" network. Address 0.0.0.0/32 may be used as a source address for this host on this network; other addresses within 0.0.0.0/8 may be used to refer to specified hosts on this network [RFC1700, page 4].” A user on Reddit writes, this announcement will probably get “the same reaction when 1.1.1.1 and 1.0.0.1 became available, and AT&T blocked it 'by accident' or most equipment vendors or major ISP will use 0.0.0.0/8 as a loopback interface or test interface because they never thought it would be assigned to anyone.” Another user on Elegant treader writes, “I could actually see us successfully inventing, and implementing, a multiverse concept for ipv4 to make these 32 bit addresses last another 40 years, as opposed to throwing these non-upgradable, hardcoded v4 devices out”. Another writes, if they would have “taken IPv4 and added more bits - we might all be using IPv6 now”. The user further mentions, “Instead they used the opportunity to cram every feature but the kitchen sink in there, so none of the hardware vendors were interested in implementing it and the backbones were slow to adopt it. So we got mass adoption of NAT instead of mass adoption of IPv6”. A user explains, “A single /8 isn’t going to meaningfully impact the exhaustion issues IPv4 faces. I believe it was APNIC a couple of years ago who said they were already facing allocation requests equivalent to an /8 a month”. “It’s part of the reason hand-wringing over some of the “wasteful” /8s that were handed out to organizations in the early days is largely pointless. Even if you could get those orgs to consolidate and give back large useable ranges in those blocks, there’s simply not enough there to meaningfully change the long term mismatch between demand and supply”, the user further adds. To know about these developments in detail, watch Dave Taht’s keynote video on YouTube: https://www.youtube.com/watch?v=92aNK3ftz6M&feature=youtu.be An attack on SKS Keyserver Network, a write-only program, poisons two high-profile OpenPGP certificates Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more! Amazon adds UDP load balancing support for Network Load Balancer


Deploying a Zabbix proxy

Packt
11 Sep 2015
12 min read
In this article by Andrea Dalle Vacche, author of the book Mastering Zabbix, Second Edition, you will learn the basics on how to deploy a Zabbix proxy on a Zabbix server. (For more resources related to this topic, see here.) A Zabbix proxy is compiled together with the main server if you add --enable-proxy to the compilation options. The proxy can use any kind of database backend, just as the server does, but if you don't specify an existing DB, it will automatically create a local SQLite database to store its data. If you intend to rely on SQLite, just remember to add --with-sqlite3 to the options as well. When it comes to proxies, it's usually advisable to keep things light and simple as much as we can; of course, this is valid only if the network design permits us to take this decision. A proxy DB will just contain configuration and measurement data that, under normal circumstances, is almost immediately synchronized with the main server. Dedicating a full-blown database to it is usually an overkill, so unless you have very specific requirements, the SQLite option will provide the best balance between performance and ease of management. If you didn't compile the proxy executable the first time you deployed Zabbix, just run configure again with the options you need for the proxies: $ ./configure --enable-proxy --enable-static --with-sqlite3 --with-net-snmp --with-libcurl --with-ssh2 --with-openipmi In order to build the proxy statically, you must have a static version of every external library needed. The configure script doesn't do this kind of check. Compile everything again using the following command: $ make Be aware that this will compile the main server as well; just remember not to run make install, nor copy the new Zabbix server executable over the old one in the destination directory. The only files you need to take and copy over to the proxy machine are the proxy executable and its configuration file. The $PREFIX variable should resolve to the same path you used in the configuration command (/usr/local by default): # cp src/zabbix_proxy/zabbix_proxy $PREFIX/sbin/zabbix_proxy # cp conf/zabbix_proxy.conf $PREFIX/etc/zabbix_proxy.conf Next, you need to fill out relevant information in the proxy's configuration file. The default values should be fine in most cases, but you definitely need to make sure that the following options reflect your requirements and network status: ProxyMode=0 This means that the proxy machine is in an active mode. Remember that you need at least as many Zabbix trappers on the main server as the number of proxies you deploy. Set the value to 1 if you need or prefer a proxy in the passive mode. The following code captures this discussion: Server=n.n.n.n This should be the IP number of the main Zabbix server or of the Zabbix node that this proxy should report to: Hostname=Zabbix proxy This must be a unique, case-sensitive name that will be used in the main Zabbix server's configuration to refer to the proxy: LogFile=/tmp/zabbix_proxy.log LogFileSize=1 DebugLevel=2 If you are using a small, embedded machine, you may not have much disk space to spare. 
In that case, you may want to comment out all the options regarding the log file and let syslog send the proxy's log to another server on the Internet:

# DBHost=
# DBSchema=
# DBUser=
# DBPassword=
# DBSocket=
# DBPort=

We now need to create the SQLite database; this can be done with the following commands:

$ mkdir -p /var/lib/sqlite/
$ sqlite3 /var/lib/sqlite/zabbix.db < /usr/share/doc/zabbix-proxy-sqlite3-2.4.4/create/schema.sql

Now, in the DBName parameter, we need to specify the full path to our SQLite database:

DBName=/var/lib/sqlite/zabbix.db

The proxy will automatically populate and use a local SQLite database. Fill out the relevant information if you are using a dedicated, external database:

ProxyOfflineBuffer=1

This is the number of hours that a proxy will keep monitored measurements if communications with the Zabbix server go down. Once the limit has been reached, the proxy will housekeep away the old data. You may want to double or triple it if you know that you have a faulty, unreliable link between the proxy and the server.

CacheSize=8M

This is the size of the configuration cache. Make it bigger if you have a large number of hosts and items to monitor.

Zabbix's runtime proxy commands

There is a set of commands that you can run against the proxy to change runtime parameters. These commands are really useful if your proxy is struggling with items, that is, if it is taking longer than expected to deliver item values and you want to keep the Zabbix proxy up and running. You can force the configuration cache to be refreshed from the Zabbix server with the following:

$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R config_cache_reload

This command will invalidate the configuration cache on the proxy side and will force the proxy to ask our Zabbix server for the current configuration. We can also increase or decrease the log level quite easily at runtime with log_level_increase and log_level_decrease:

$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R log_level_increase

This command will increase the log level for the proxy process; the same command also supports a target, which can be a PID, a process type, or a process type plus number. What follows are a few examples.

Increase the log level of poller process number 3:

$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R log_level_increase=poller,3

Increase the log level of the process with PID 27425:

$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R log_level_increase=27425

Increase or decrease the log level of the icmp pinger or any other proxy process with:

$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R log_level_increase="icmp pinger"
zabbix_proxy [28064]: command sent successfully
$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R log_level_decrease="icmp pinger"
zabbix_proxy [28070]: command sent successfully

We can quickly see the changes reflected in the log file here:

28049:20150412:021435.841 log level has been increased to 4 (debug)
28049:20150412:021443.129 Got signal [signal:10(SIGUSR1),sender_pid:28034,sender_uid:501,value_int:770(0x00000302)].
28049:20150412:021443.129 log level has been decreased to 3 (warning)

Deploying a Zabbix proxy using RPMs

Deploying a Zabbix proxy using the RPM is a very simple task. Here, there are fewer steps required, as Zabbix itself distributes a prepackaged Zabbix proxy that is ready to use.
What you need to do is simply add the official Zabbix repository with the following command that must be run from root: $ rpm –ivh http://repo.zabbix.com/zabbix/2.4/rhel/6/x86_64/zabbix-2.4.4-1.el6.x86_64.rpm Now, you can quickly list all the available zabbix-proxy packages with the following command, again from root: $ yum search zabbix-proxy ============== N/S Matched: zabbix-proxy ================ zabbix-proxy.x86_64 : Zabbix Proxy common files zabbix-proxy-mysql.x86_64 : Zabbix proxy compiled to use MySQL zabbix-proxy-pgsql.x86_64 : Zabbix proxy compiled to use PostgreSQL zabbix-proxy-sqlite3.x86_64 : Zabbix proxy compiled to use SQLite3 In this example, the command is followed by the relative output that lists all the available zabbix-proxy packages; here, all you have to do is choose between them and install your desired package: $ yum install zabbix-proxy-sqlite3 Now, you've already installed the Zabbix proxy, which can be started up with the following command: $ service zabbix-proxy start Starting Zabbix proxy: [ OK ] Please also ensure that you enable your Zabbix proxy when the server boots with the $ chkconfig zabbix-proxy on command. That done, if you're using iptables, it is important to add a rule to enable incoming traffic on the 10051 port (that is the standard Zabbix proxy port) or, in any case, against the port that is specified in the configuration file: ListenPort=10051 To do that, you simply need to edit the iptables configuration file /etc/sysconfig/iptables and add the following line right on the head of the file: -A INPUT -m state --state NEW -m tcp -p tcp --dport 10051 -j ACCEPT Then, you need to restart your local firewall from root using the following command: $ service iptables restart The log file is generated at /var/log/zabbix/zabbix_proxy.log: $ tail -n 40 /var/log/zabbix/zabbix_proxy.log 62521:20150411:003816.801 **** Enabled features **** 62521:20150411:003816.801 SNMP monitoring: YES 62521:20150411:003816.801 IPMI monitoring: YES 62521:20150411:003816.801 WEB monitoring: YES 62521:20150411:003816.801 VMware monitoring: YES 62521:20150411:003816.801 ODBC: YES 62521:20150411:003816.801 SSH2 support: YES 62521:20150411:003816.801 IPv6 support: YES 62521:20150411:003816.801 ************************** 62521:20150411:003816.801 using configuration file: /etc/zabbix/zabbix_proxy.conf As you can quickly spot, the default configuration file is located at /etc/zabbix/zabbix_proxy.conf. The only thing that you need to do is make the proxy known to the server and add monitoring objects to it. All these tasks are performed through the Zabbix frontend by just clicking on Admin | Proxies and then Create. This is shown in the following screenshot: Please take care to use the same Proxy name that you've used in the configuration file, which, in this case, is ZabbixProxy; you can quickly check with: $ grep Hostname= /etc/zabbix/zabbix_proxy.conf # Hostname= Hostname=ZabbixProxy Note how, in the case of an Active proxy, you just need to specify the proxy's name as already set in zabbix_proxy.conf. It will be the proxy's job to contact the main server. On the other hand, a Passive proxy will need an IP address or a hostname for the main server to connect to, as shown in the following screenshot: You don't have to assign hosts to proxies at creation time or only in the proxy's edit screen. 
You can also do that from a host configuration screen, as follows:

One of the advantages of proxies is that they don't need much configuration or maintenance; once they are deployed and you have assigned some hosts to one of them, the rest of the monitoring activities are fairly transparent. Just remember to check the number of values per second that every proxy has to guarantee, as expressed by the Required performance column in the proxies' list page:

Values per second (VPS) is the number of measurements per second that a single Zabbix server or proxy has to collect. It's an average value that depends on the number of items and the polling frequency for every item. The higher the value, the more powerful the Zabbix machine must be. Depending on your hardware configuration, you may need to redistribute the hosts among proxies, or add new ones, if you notice degraded performance coupled with a high VPS.

Considering a different Zabbix proxy database

Since Zabbix 2.4, support for nodes has been discontinued, and the only distributed scenario available is limited to the Zabbix proxy; those proxies now play a truly critical role. Also, with proxies deployed in many different geographic locations, the infrastructure is more subject to network outages. That said, there is a case for considering which database we want to use for those critical remote proxies. SQLite3 is a good product for a standalone and lightweight setup, but if the proxy we've deployed needs to retain a considerable amount of metrics, we need to consider the fact that SQLite3 has certain weak spots:

The atomic-locking mechanism in SQLite3 is not the most robust ever
SQLite3 suffers during high-volume writes
SQLite3 does not implement any kind of user authentication mechanism

Apart from the fact that SQLite3 does not implement any authentication mechanism, the database files are created with the standard umask, which makes them readable by everyone. In the event of a crash during high load, it is also not the best database to use. Here is an example of the SQLite3 database and how to access it using a third-party account:

$ ls -la /tmp/zabbix_proxy.db
-rw-r--r--. 1 zabbix zabbix 867328 Apr 12 09:52 /tmp/zabbix_proxy.db
# su - adv
[adv@localhost ~]$ sqlite3 /tmp/zabbix_proxy.db
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite>

Then, for all the critical proxies, it is advisable to use a different database. Here, we will use MySQL, which is a well-known database.
To install the Zabbix proxy with MySQL, if you're compiling it from source, you need to use the following command line: $ ./configure --enable-proxy --enable-static --with-mysql --with-net-snmp --with-libcurl --with-ssh2 --with-openipmi This should be followed by the usual: $ make Instead, if you're using the precompiled rpm, you can simply run from root: $ yum install zabbix-proxy-mysql Now, you need to start up your MySQL database and create the required database for your proxy: $ mysql -uroot -p<password> $ create database zabbix_proxy character set utf8 collate utf8_bin; $ grant all privileges on zabbix_proxy.* to zabbix@localhost identified by '<password>'; $ quit; $ mysql -uzabbix -p<password> zabbix_proxy < database/mysql/schema.sql If you've installed using rpm, the previous command will be: $ mysql -uzabbix -p<password> zabbix_proxy < /usr/share/doc/zabbix-proxy-mysql-2.4.4/create/schema.sql/schema.sql Now, we need to configure zabbix_proxy.conf and add the proper value to those parameters: DBName=zabbix_proxy DBUser=zabbix DBPassword=<password> Please note that there is no need to specify DBHost as the socket used for MySQL. Finally, we can start up our Zabbix proxy with the following command from root: $ service zabbix-proxy start Starting Zabbix proxy: [ OK ] Summary In this article, you learned how to start up a Zabbix proxy over a Zabbix server. Resources for Article: Further resources on this subject: Zabbix Configuration[article] Bar Reports in Zabbix 1.8[article] Going beyond Zabbix agents [article]
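As a back-of-the-envelope complement to the Required performance (VPS) discussion earlier in this article, the values-per-second figure can be estimated before deployment from the item counts and polling intervals you plan to assign to a proxy: each item contributes roughly 1/interval values per second. The host list below is purely illustrative; plug in your own numbers.

```python
# Rough VPS (values per second) estimate for a Zabbix proxy.
planned_hosts = [
    # (number of hosts, items per host, polling interval in seconds)
    (50, 40, 60),    # e.g. 50 Linux servers, 40 items each, polled every minute
    (10, 120, 30),   # e.g. 10 network devices with SNMP items every 30 s
    (5, 20, 300),    # e.g. 5 low-priority hosts polled every 5 minutes
]

vps = sum(hosts * items / interval for hosts, items, interval in planned_hosts)
print(f"Estimated load: {vps:.1f} new values per second")

# A very loose sizing hint, not an official Zabbix threshold:
if vps > 100:
    print("Consider a beefier proxy, a full database backend, or splitting hosts across proxies.")
```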


Configuring FreeSWITCH for WebRTC

Packt
21 Jul 2015
12 min read
In the article written by Giovanni Maruzzelli, author of FreeSWITCH 1.6 Cookbook, we learn how WebRTC is all about security and encryption. They are not an afterthought; they're intimately interwoven at the design level and are mandatory. For example, you cannot stream audio or video in the clear (unencrypted) via WebRTC. (For more resources related to this topic, see here.)

Getting ready

To start with this recipe, you need certificates. These are the same kind of certificates used by web servers for SSL-HTTPS. Yes, you can be your own Certification Authority and self-sign your own certificate. However, this will add considerable hassle; browsers will not recognize the certificate, and you will have to manually instruct them to make a security exception and accept it, or import your own Certification Authority chain into the browser. Also, in some mobile browsers, it is not possible to import self-signed Certification Authorities at all. The bottom line is that you can buy an SSL certificate for less than $10, and in 5 minutes. (No signatures, papers, faxes, telephone calls... nothing is required. Only a confirmation email and a few bucks are enough.) It will save you much frustration, and you'll be able to cleanly showcase your installation to others.

The same reasoning applies to DNS Fully Qualified Domain Names (FQDNs), since certificates belong to FQDNs. You can put your DNS names in /etc/hosts, or set up an internal DNS server, but this will not work for mobile clients and desktops outside your control. You can register a domain, point an FQDN to your machine's public IP (it can be a Linode, an AWS VM, or whatever), and buy a certificate using that FQDN as the Common Name (CN). Don't try to set up the WebRTC server on your internal LAN behind the same NAT that your clients are in (again, it is possible but painful).

How to do it...

Once you have obtained your certificate (be sure to download the Certification Authority chain too, and keep your private key; you'll need it), you must concatenate those three elements to create the certificates needed for mod_sofia to serve SIP signaling via WSS and media via SRTP/DTLS. With the certificates in the right place, you can now activate SSL in Sofia. Open /usr/local/freeswitch/conf/vars.xml: as you can see, in the default configuration, both lines that feature SSL are false. Edit them both to change them to true.

How it works...

By default, Sofia will listen on port 7443 for WSS clients. You may want to change this port if you need your clients to traverse very restrictive firewalls. Edit /usr/local/freeswitch/conf/sip-profiles/internal.xml and change the "wss-binding" value to 443. This number, 443, is the HTTPS (SSL) port, and it is almost universally open in all firewalls. Also, WSS traffic is indistinguishable from HTTPS/SSL traffic, so your signaling will pass through the most advanced Deep Packet Inspection. Remember that if you use port 443 for WSS, you cannot use that same port for HTTPS, so you will need to deploy your secure web server on another machine.

There's more...

A few examples of such a configuration are certificates, DNS, and STUN/TURN. Generally speaking, if you set up with real DNS names, you will not need to run your own STUN server; your clients can rely on Google STUN servers. But if you need a TURN server because some of your clients need a media relay (because they're behind a demented NAT or have UDP blocked by zealous firewalls), install rfc5766-turn-server on another machine and have it listen on TCP ports 443 and 80.
You can also put certificates with it and use TURNS on encrypted connection. The same firewall piercing properties as per signaling. SIP signaling in JavaScript with SIP.js (WebRTC client) Let's carry out the most basic interaction with a web browser audio/video through WebRTC. We'll start using SIP.js, which uses a protocol very familiar to all those who are old hands at VoIP. A web page will display a click-to-call button, and anyone can click for inquiries. That call will be answered by our company's PBX and routed to our employee extension (1010). Our employee will wait on a browser with the "answer" web page open, and will automatically be connected to the incoming call (if our employee does not answer, the call will go to their voicemail). Getting ready To use this example, download version 0.7.0 of the SIP.js JavaScript library from www.sipjs.com. We need an "anonymous" user that we can allow into our system without risks, that is, a user that can do only what we have preplanned. Create an anonymous user for click-to-call in a file named /usr/local/freeswitch/conf/directory/default/anonymous.xml : <include> <user id="anonymous">    <params>      <param name="password" value="welcome"/>    </params>    <variables>      <variable name="user_context" value="anonymous"/>      <variable name="effective_caller_id_name" value="Anonymous"/>      <variable name="effective_caller_id_number" value="666"/>      <variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>      <variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>    </variables> </user> </include> Then add the user's own dialplan to /usr/local/freeswitch/conf/dialplan/anonymous.xml: <include> <context name="anonymous">    <extension name="public_extensions">      <condition field="destination_number" expression="^(10[01][0-9])$">        <action application="transfer" data="$1 XML default"/>      </condition>    </extension>    <extension name="conferences">      <condition field="destination_number" expression="^(36d{2})$">        <action application="answer"/>        <action application="conference" data="$1-${domain_name}@video-mcu"/>      </condition>    </extension>    <extension name="echo">      <condition field="destination_number" expression="^9196$">        <action application="answer"/>        <action application="echo"/>      </condition>    </extension> </context> </include> How to do it... In a directory served by your HTPS server (for example, Apache with an SSL certificate), put all the following files. 
Minimal click-to-call caller client HTML (call.html): <html> <body>        <button id="startCall">Start Call</button>        <button id="endCall">End Call</button>        <br/>        <video id="remoteVideo"></video>        <br/>        <video id="localVideo" muted="muted" width="128px" height="96px"></video>        <script src="js/sip-0.7.0.min.js"></script>        <script src="call.js"></script> </body> </html> JAVASCRIPT (call.js): var session;   var endButton = document.getElementById('endCall'); endButton.addEventListener("click", function () {        session.bye();        alert("Call Ended"); }, false);   var startButton = document.getElementById('startCall'); startButton.addEventListener("click", function () {        session = userAgent.invite('sip:1010@gmaruzz.org', options);        alert("Call Started"); }, false);   var userAgent = new SIP.UA({        uri: 'anonymous@gmaruzz.org',        wsServers: ['wss://self2.gmaruzz.org:7443'],        authorizationUser: 'anonymous',        password: 'welcome' });   var options = {        media: {                constraints: {                        audio: true,                        video: true                },                render: {                        remote: document.getElementById('remoteVideo'),                        local: document.getElementById('localVideo')                }        } }; Minimal callee HTML (answer.html): <html> <body>        <button id="endCall">End Call</button>        <br/>        <video id="remoteVideo"></video>        <br/>        <video id="localVideo" muted="muted" width="128px" height="96px"></video>        <script src="js/sip-0.7.0.min.js"></script>        <script src="answer.js"></script> </body> </html> JAVASCRIPT (answer.js): var session;   var endButton = document.getElementById('endCall'); endButton.addEventListener("click", function () {        session.bye();        alert("Call Ended"); }, false);   var userAgent = new SIP.UA({        uri: '1010@gmaruzz.org',        wsServers: ['wss://self2.gmaruzz.org:7443'],        authorizationUser: '1010',        password: 'ciaociao' });   userAgent.on('invite', function (ciapalo) {        session = ciapalo;        session.accept({                media: {                        constraints: {                               audio: true,                                video: true                        },                        render: {                                remote: document.getElementById('remoteVideo'),                                local: document.getElementById('localVideo')                        }                  }        }); }); How it works... Our employee (the callee, or the person who will answer the call) will sit tight with the answer.html web page open on their browser. Upon page load, JavaScript will have created the SIP agent and registered it with our FreeSWITCH server as SIP user "1010" (just as our employee was on their own regular SIP phone). Our customer (the caller, or the person who initiates the communication) will visit the call.html webpage (while loading, this web page will register as an SIP "anonymous" user with FreeSWITCH), and then click on the Start Call button. This clicking will activate the JavaScript that creates the communication session using the invite method of the user agent, passing as an argument the SIP address of our employee. The Invite method initiates a call, and our FreeSWITCH server duly invites SIP user 1010. That happens to be the answer.html web page our employee is in front of. 
The INVITE sent from FreeSWITCH to answer.html will activate the JavaScript local user agent, which will create the session and accept the call. At this moment, the caller and callee are connected, and audio and video begin to flow back and forth. The received audio/video stream is rendered by the remoteVideo element in the web page, while the local stream (the video that is sent to the peer) shows up in the small localVideo element. The local video is muted so that it does not generate audio feedback (the Larsen effect). See also The Configuring a SIP phone to register with FreeSWITCH recipe in Chapter 2, Connecting Telephones and Service Providers, the SIP.js documentation at http://sipjs.com/guides/, and the mod_verto page on the FreeSWITCH Confluence wiki (confluence/display/FREESWITCH/mod_verto). Summary This article features the new disruptive technology that allows real-time, secure audio/video/data communication from hundreds of millions of browsers. FreeSWITCH is ready to serve as a gateway and an application server. Resources for Article: Further resources on this subject: WebRTC with SIP and IMS [article] Architecture of FreeSWITCH [article] Calling your fellow agents [article]
Installing Arch Linux using the official ISO

Packt
19 Feb 2013
7 min read
(For more resources related to this topic, see here.) Getting ready You can get the official ISO image file from https://www.archlinux.org/download/. On this page you will find a download link to the latest release. Depending on your preference, download the torrent file or the ISO image file immediately. The following list describes the main tasks that we will perform in this recipe: Preparing, booting, and setting keyboard layout: We are going to get the ISO file from the download page of the Arch Linux website and store it on the preferred media of our choice. At the time of writing this article, there is a dual ISO image file that contains both i686 and x86-64 architectures on one disk. Start your PC with your preferred installation media (CD or USB stick). On most PC systems, you can access the boot menu by pressing one of the function keys, usually between F8 and F12 depending on the motherboard manufacturer. On older machines where you do not yet have a boot menu, you might need to change the boot order in the BIOS where the CD-ROM (or DVD/Blu-ray) has to be chosen as the first device to try booting from. We'll also explain how to use a different keyboard layout than the default one in this recipe. Creating, formatting, and mounting partitions: You can partition the disks the way you want using cfdisk (for MBR disk partitioning) or cgdisk (for GUID disk partitioning). After creating the partitions, we can choose to format our created partitions with specific filesystems. When all partitions are formatted, we need to mount the partitions. First we will mount the root partition to /mnt. The other partitions will be mounted later on after you have created the specific folders. We'll designate our device with /dev/sdX; in your case this can be /dev/sda, and so on. Connecting to the Internet: To be able to continue installing the ISO you need to connect to the Internet, because there are no packages available for installation on the ISO. For a wireless network you will need to use netcfg. When connected to a wired network, just use dhcpcd or dhclient. Installing the base system and boot loader: These days the base system gets installed by running a simple script pacstrap. Pacstrap takes multiple parameters, the target location, and the packages or groups you want to install. For people who want to develop on their machines, the best base install is adding base-devel to the default installation. For normal end users, just base will be sufficient to start. Configuring the system: In this recipe, we'll describe the flow of what to do during the configuration. How to do it... The following steps will guide you in preparing, booting, and setting keyboard layout: Once you have downloaded the ISO image file, you should also verify its integrity by downloading the sha1sums.txt file from the download page. These days you can also check if the ISO is completely valid by verifying the signature of the ISO. Verify the integrity by issuing the sha1sum -c sha1sums.txt command and you'll see whether your download was successful or not. Also check if the signature of the ISO is correct by running gpg -v archlinux-...iso.sig: sha1sum -c sha1sums.txt gpg -v archlinux-2012-08-04-dual.iso.sig The following screenshot shows the execution of this step: As you can see in the previous screenshot, the ISO's checksum is ok and the signature is valid. Now that we are sure our ISO is ok, we can burn this to a CD with our favorite burning program. 
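If you prefer a USB stick to a CD, a common alternative (not part of the original recipe, so treat it as a hedged sketch) is to write the ISO to the stick with dd. The device name /dev/sdX below is a placeholder: identify your stick first with lsblk, and be careful, because dd will overwrite the target device without asking. Write to the whole device (sdX), not to a partition (sdX1):

lsblk
dd if=archlinux-2012-08-04-dual.iso of=/dev/sdX bs=4M
sync

Once sync returns, the stick boots in the same way as the CD.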
Insert the CD into the drive, or insert the USB stick into the USB port of your PC. Enter the boot menu, or let your computer automatically boot from the inserted installation media. If the previous steps are performed correctly, you will see the following screenshot: Select the architecture you want and press Enter, and we'll be on our way. Search the keyboard layout desired for your region. The available keyboard layouts can be found at /usr/share/kbd/keymaps/. Set the desired keyboard layout with loadkeys keyboardlayout. Now let's perform the following steps to create, format, and mount partitions: Start cfdisk or cgdisk, having the first parameter as the device you want to partition: cfdisk /dev/sdX cgdisk /dev/sdX Create your partition scheme. Store the partition scheme. Use the mkfs command to create a filesystem on a specific partition: mkfs -t vfat /dev/sdX mkfs.ext4 -L root /dev/sdX Mount your root partition to /mnt: mount /dev/sdX3 /mnt Make directories under mount for your other partitions: mkdir -p /mnt/boot Mount the other partitions: mount /dev/sdX1 /mnt/boot The following steps are needed to connect to the Internet: When we need a wireless network, create a netcfg profile and run netcfg mywireless. Use dhclient or dhcpcd to get an IP address. The following steps should be performed for installing the base system and boot loader: Run pacstrap with the desired parameters: pacstrap /mnt base base-devel Install the desired boot loader: the best choice at this moment is Syslinux. The final installation of the boot loader will be done in a chroot during the initial configuration. We'll now list the steps to do during the configuration: Generate fstab with genfstab: genfstab -p /mnt >> /mnt/etc/fstab Change the root into the system location: arch-chroot /mnt Set your hostname in /etc/hostname. Create /etc/localtime symlink. Set your locale in /etc/locale.conf. Uncomment the configured locale in /etc/locale.gen. Run locale-gen. Configure /etc/mkinitcpio.conf. Generate your initial ramdisk: mkinitcpio -p linux Finish installation of your boot loader. Set the root password with passwd. Leave the chroot environment (exit). How it works... We downloaded the ISO image file via torrent, or via HTTP from the mirror sites listed on the download page. The sha1sum command lets us verify the integrity of the downloaded ISO. On top of the checksum, we can also check the integrity by verifying the signature available for the ISO. So now, we can rest assured that the downloaded file is the real one. The ISO contains a fully working operating system. It also contains all the necessary tools to perform system recovery and installation. The keyboard configuration set with loadkeys will make sure that the key you press on your keyboard will be translated to the correct letter on your screen. Using a different keyboard layout from the one on your physical keyboard might be confusing. We then created a partition scheme on the selected disk with the appropriate tool (cfdisk or cgdisk). Make Filesystem (mkfs) is a unified frontend to create a filesystem. Using it we created our filesystem layout manually under/mnt by creating our default partition layout in our root, and mounting the specific partitions accordingly. You can make a connection with your wireless network (if needed), and then use dhcpcd or dhclient to obtain an IP address that enables you to access the Internet. Pacstrap will run pacman with a modified root location to install the desired packages into the newly created system. 
For example, installing Syslinux: pacstrap /mnt syslinux The configuration files we edited (fstab, hostname, locale, mkinitcpio, and the boot loader configuration) persist these settings, so we don't have to repeat those steps on every boot. Summary This article explained the procedure to get Arch Linux installed on your system using the official installation media. Resources for Article: Further resources on this subject: Compression Formats in Linux Shell Script [Article] Making a Complete yet Small Linux Distribution [Article] Linux Shell Script: Tips and Tricks [Article]
Zabbix Configuration

Packt
16 Mar 2015
18 min read
In this article by Patrik Uytterhoeven, author of the book Zabbix Cookbook, we will see the following topics: Server installation and configuration Agent installation and configuration Frontend installation and configuration (For more resources related to this topic, see here.) We will begin with the installation and configuration of a Zabbix server, Zabbix agent, and web interface. We will make use of our package manager for the installation. Not only will we show you how to install and configure Zabbix, we will also show you how to compile everything from source. We will also cover the installation of the Zabbix server in a distributed way. Server installation and configuration Here we will explain how to install and configure the Zabbix server, along with the prerequisites. Getting ready To get started, we need a properly configured server, with a Red Hat 6.x or 7.x 64-bit OS installed or a derivate such as CentOS. It is possible to get the installation working on other distributions such as SUSE, Debian, Ubuntu, or another Linux distribution, but I will be focusing on Red Hat based systems. I feel that it's the best choice as the OS is not only available for big companies willing to pay Red Hat for support, but also for those smaller companies that cannot afford to pay for it, or for those just willing to test it or run it with community support. Other distros like Debian, Ubuntu, SUSE, OpenBSD will work fine too. It is possible to run Zabbix on 32-bit systems, but I will only focus on 64-bit installations as 64-bit is probably what you will run in a production setup. However if you want to try it on 32-bit system, it is perfectly possible with the use of the Zabbix 32-bit binaries. How to do it... The following steps will guide you through the server installation process: The first thing we need to do is add the Zabbix repository to our package manager on our server so that we are able to download the Zabbix packages to set up our server. To find the latest repository, go to the Zabbix webpage www.zabbix.com and click on Product | Documentation then select the latest version. At the time of this writing, it is version 2.4. From the manual, select option 3 Installation, then go to option 3 Installation from packages and follow instructions to install the Zabbix repository. Now that our Zabbix repository is installed, we can continue with our installation. For our Zabbix server to work, we will also need a database for example: MySQL, PostgreSQL, Oracle and a web server for the frontend such as Apache, Nginx, and so on. In our setup, we will install Apache and MySQL as they are better known and easiest to set up. There is a bit of a controversy around MySQL that was acquired by Oracle some time ago. Since then, most of the original developers left and forked the project. Those forks have also made major improvements over MySQL. It could be a good alternative to make use of MariaDB or Percona. In Red Hat Enterprise Linux (RHEL) 7.x, MySQL has been replace already by MariaDB. http://www.percona.com/. https://mariadb.com/. http://www.zdnet.com/article/stallman-admits-gpl-flawed-proprietary-licensing-needed-to-pay-for-mysql-development/. 
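To make the repository step above concrete, this is roughly what it looked like for Zabbix 2.4 on RHEL/CentOS 6. The exact URL and release package version are assumptions that change between releases, so always copy the current command from the Zabbix documentation page mentioned earlier:

# rpm -ivh http://repo.zabbix.com/zabbix/2.4/rhel/6/x86_64/zabbix-release-2.4-1.el6.noarch.rpm
# yum clean all

The release package only installs the repository definition under /etc/yum.repos.d/; the actual Zabbix packages are installed in the next steps.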
The following steps will show you how to install the MySQL server and the Zabbix server with a MySQL connection: # yum install mysql-server zabbix-server-mysql # yum install mariadb-server zabbix-server-mysql (for RHEL 7) # service mysqld start # systemctl start mariadb.service (for RHEL 7) # /usr/bin/mysql_secure_installation We make use of MySQL because it is what most people know best and use most of the time. It is also easier to set up than PostgreSQL for most people. However, a MySQL DB will not shrink in size. It's probably wise to use PostgreSQL instead, as PostgreSQL has a housekeeper process that cleans up the database. However, in very large setups this housekeeper process of PostgreSQL can at times also be the problem of slowness. When this happens, a deeper understanding of how housekeeper works is needed. MySQL will come and ask us some questions here so make sure you read the next lines before you continue: For the MySQL secure installation, we are being asked to give the current root password or press Enter if you don't have one. This is the root password for MySQL and we don't have one yet as we did a clean installation of MySQL. So you can just press Enter here. Next question will be to set a root password; best thing is of course, to set a MySQL root password. Make it a complex one and store it safe in a program such as KeePass or Password Safe. After the root password is set, MySQL will prompt you to remove anonymous users. You can select Yes and let MySQL remove them. We also don't need any remote login of root users, so best is to disallow remote login for the root user as well. For our production environment, we don't need any test databases left on our server. So those can also be removed from our machine and finally we do a reload of the privileges. You can now continue with the rest of the configuration by configuring our database and starting all the services. This way we make sure they will come up again when we restart our server: # mysql -u root -p mysql> create database zabbix character set utf8 collate utf8_bin;mysql> grant all privileges on zabbix.* to zabbix@localhost identified by '<some-safe-password>';mysql> exit; # cd /usr/share/doc/zabbix-server-mysql-2.4.x/create# mysql -u zabbix -p zabbix < schema.sql# mysql -u zabbix -p zabbix < images.sql# mysql -u zabbix -p zabbix < data.sql Depending on the speed of your machine, importing the schema could take some time (a few minutes). It's important to not mix the order of the import of the SQL files! Now let's edit the Zabbix server configuration file and add our database settings in it: # vi /etc/zabbix/zabbix_server.confDBHost=localhostDBName=zabbixDBUser=zabbixDBPassword=<some-safe-password> Let's start our Zabbix server and make sure it will come online together with the MySQL database after reboot: # service zabbix-server start # chkconfig zabbix-server on # chkconfig mysqld on On RHEL 7 this will be: # systemctl start zabbix-server # systemctl enable zabbix-server # systemctl enable mariadb Check now if our server was started correctly: # tail /var/log/zabbix/zabbix_server.log The output would look something like this: # 1788:20140620:231430.274 server #7 started [poller #5]# 1804:20140620:231430.276 server #19 started [discoverer #1] If no errors where displayed in the log, your zabbix-server is online. 
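As an extra sanity check (not part of the original recipe, and it assumes the default trapper port of 10051), you can confirm that the Zabbix server process is actually listening before you continue:

# ss -tln | grep 10051

You should see a LISTEN entry on 0.0.0.0:10051; on older systems without ss, netstat -tln gives the same information.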
In case you have errors, they will probably look like this: 1589:20150106:211530.180 [Z3001] connection to database 'zabbix' failed: [1045] Access denied for user 'zabbix'@'localhost' (using password: YES) 1589:20150106:211530.180 database is down: reconnecting in 10 seconds In this case, go back to the zabbix_server.conf file and check the DBHost, DBName, DBUser, and DBPassword parameters again to see if they are correct. The only thing that still needs to be done is editing the firewall. Add the following line in the /etc/sysconfig/iptables file under the line with dport 22. This can be done with vi, Emacs, or another editor of your choice, for example: vi /etc/sysconfig/iptables. If you would like to know more about iptables, have a look at the CentOS wiki. # -A INPUT -m state --state NEW -m tcp -p tcp --dport 10051 -j ACCEPT People making use of RHEL 7 have firewalld and need to run the following command instead: # firewall-cmd --permanent --add-port=10051/tcp Now that this is done, you can reload the firewall. The Zabbix server is installed and we are ready to continue to the installation of the agent and the frontend: # service iptables restart # firewall-cmd --reload (For users of RHEL 7) Always check that the ports 10051 and 10050 are also in your /etc/services file; both the server and agent ports are IANA registered. How it works... The installation we have done here is just for the Zabbix server and the database. We still need to add an agent and a frontend with a web server. The Zabbix server will communicate through the local socket with the MySQL database. Later, we will see how we can change this if we want to install MySQL on another server than our Zabbix server. The Zabbix server needs a database to store its configuration and the received data, for which we have installed a MySQL database. Remember we did a create database and named it zabbix? Then we did a grant on the zabbix database and gave all privileges on this database to a user named zabbix with a password of our choosing, <some-safe-password>. After the creation of the database we had to upload three files, namely schema.sql, images.sql, and data.sql. Those files contain the database structure and data the Zabbix server needs to work. It is very important that you keep the correct order when you upload them to your database. The next thing we did was adjusting the zabbix_server.conf file; this is needed to let our Zabbix server know which database we use, with what credentials, and where it is located. Then we started the Zabbix server and made sure that after a reboot both MySQL and the Zabbix server would start up again. Our final step was to check the log file to see that the Zabbix server started without any errors, and to open TCP port 10051 in the firewall. Port 10051 is the port used by Zabbix active agents to communicate with the server. There's more... We have changed some settings for the communication with our database in the /etc/zabbix/zabbix_server.conf file but there are many more options in this file to set.
So let's have a look at which are the other options that we can change. The following URL gives us an overview of all supported parameters in the zabbix_server.conf file: https://www.zabbix.com/documentation/2.4/manual/appendix/config/zabbix_server. You can start the server with another configuration file so you can experiment with multiple configuration settings. This can be useful if you like to experiment with certain options. To do this, run the following command where the <config file> file is another zabbix_server.conf file than the original file: zabbix_server -c <config file> See also http://www.greensql.com/content/mysql-security-best-practices-hardening-mysql-tips http://passwordsafe.sourceforge.net/ http://keepass.info/ http://www.fpx.de/fp/Software/Gorilla/ http://wiki.centos.org/HowTos/Network/IPTables Agent installation and configuration In this section, we will explain you the installation and configuration of the Zabbix agent. The Zabbix agent is a small piece of software about 700 KB in size. You will need to install this agent on all your servers to be able to monitor the local resources with Zabbix. Getting ready In this recipe, to get our Zabbix agent installed, we need to have our server with the Zabbix server up and running. In this setup, we will install our agent first on the Zabbix server. We just make use of the Zabbix server in this setup to install a server that can monitor itself. If you monitor another server then there is no need to install a Zabbix server, only the agent is enough. How to do it... Installing the Zabbix agent is quite easy once our server has been set up. The first thing we need to do is install the agent package. Installing the agent packages can be done by running yum. In case you have skipped it, then go back and add the Zabbix repository to your package manager. Install the Zabbix agent from the package manager: # yum install zabbix-agent Open the correct port in your firewall. The Zabbix server communicates to the agent if the agent is passive. So, if your agent is on a server other than Zabbix, then we need to open the firewall on port 10050. Edit the firewall, open the file /etc/sysconfig/iptables and add the following after the line with dport 22 in the next line: # -A INPUT -m state --state NEW -m tcp -p tcp --dport 10050 -j ACCEPT Users of RHEL 7 can run: # firewall-cmd --permanent --add-port=10050/tcp Now that the firewall is adjusted, you can restart the same: # service iptables restart # firewall-cmd --reload (if you use RHEL 7) The only thing left to do is edit the zabbix_agentd.conf file, start the agent, and make sure it starts after a reboot. Edit the Zabbix agent configuration file and add or change the following settings: # vi /etc/zabbix/zabbix_agentd.conf Server=<ip of the zabbix server> ServerActive=<ip of the zabbix server> That's all for now in order to edit in the zabbix_agentd.conf file. Now, let's start the Zabbix agent: # service zabbix-agent start # systemctl start zabbix-agent (if you use RHEL 7) And finally make sure that our agent will come online after a reboot: # chkconfig zabbix-agent on # systemctl enable zabbix-agent (for RHEL 7 users) Check again that there are no errors in the log file from the agent: # tail /var/log/zabbix/zabbix_agentd.log How it works... The agent we have installed is installed from the Zabbix repository on the Zabbix server, and communicates to the server on port 10051 if we make use of an active agent. 
If we make use of a passive agent, then our Zabbix server will talk to the Zabbix agent on port 10050. Remember that our agent is installed locally on our host, so all communication stays on our server. This is not the case if our agent is installed on another server instead of our Zabbix server. We have edited the configuration file of the agent and changed the Server and ServerActive options. Our Zabbix agent is now ready to communicate with our Zabbix server. Based on the two parameters we have changed, the agent knows the IP address of the Zabbix server. The difference between passive and active modes is that the agent in passive mode will wait for the Zabbix server to ask it for data. The agent in active mode will first ask the server what it needs to monitor and pull this configuration from the Zabbix server. From that moment on, the Zabbix agent will send the values by itself to the server at regular intervals. So when we use a passive agent the Zabbix server pulls the data from the agent, whereas an active agent pushes the data to the server. We did not change the Hostname parameter in the zabbix_agentd.conf file, a parameter we normally need to change to give the host a unique name. In our case the name in the agent will already be in the Zabbix server that we have installed, so there is no need to change it this time. There's more... Just like our server, the agent has plenty more options to set in its configuration file. So open the file again and have a look at what else we can adjust. In the following URLs you will find all options that can be changed in the Zabbix agent configuration file for Unix and Windows: https://www.zabbix.com/documentation/2.4/manual/appendix/config/zabbix_agentd. https://www.zabbix.com/documentation/2.4/manual/appendix/config/zabbix_agentd_win. Frontend installation and configuration In this recipe, we will finalize our setup with the installation and configuration of the Zabbix web interface. Our Zabbix configuration is different from other monitoring tools such as Nagios in that the complete configuration is stored in a database. This means that we need a web interface to be able to configure and work with the Zabbix server. It is not possible to work without the web interface and just make use of some text files to do the configuration. Getting ready To be successful with this installation, you need to have installed the Zabbix server. It's not necessary to have the Zabbix agent installed but it is recommended. This way, we can monitor our Zabbix server because we have a Zabbix agent running on it. This can be useful in monitoring your own Zabbix server's health status. How to do it... The first thing we need to do is go back to our prompt and install the Zabbix web frontend packages. # yum install zabbix-web zabbix-web-mysql With the installation of our zabbix-web package, Apache was installed too, so we need to start Apache first and make sure it will come online after a reboot: # chkconfig httpd on; service httpd start # systemctl start httpd; systemctl enable httpd (for RHEL 7) Remember we have a firewall, so the same rule applies here. We need to open the port for the web server to be able to see our Zabbix frontend. Edit the /etc/sysconfig/iptables firewall file and add the following after the line with dport 22: # -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT If iptables is too intimidating for you, then an alternative could be to make use of Shorewall.
http://www.cyberciti.biz/faq/centos-rhel-shorewall-firewall-configuration-setup-howto-tutorial/. Users of RHEL 7 can run the following lines: # firewall-cmd --permanent --add-service=http The following screenshot shows the firewall configuration: Now that the firewall is adjusted, you can save and restart the firewall: # iptables-save # service iptables restart # firewall-cmd --reload (If you run RHEL 7) Now edit the Zabbix configuration file with the PHP setting. Uncomment the option for the timezone and fill in the correct timezone: # vi /etc/httpd/conf.d/zabbix.conf php_value date.timezone Europe/Brussels It is now time to reboot our server and see if everything comes back online with our Zabbix server configured like we intended it to. The reboot here is not necessary but it's a good test to see if we did a correct configuration of our server: # reboot Now let's see if we get to see our Zabbix server. Go to the URL of our Zabbix server that we just have installed: # http://<ip of the Zabbix server>/zabbix On the first page, we see our welcome screen. Here, we can just click Next: The standard Zabbix installation will run on port 80, although It isn't really a safe solution. It would be better to make use of HTTPS. However, this is a bit out of the scope but could be done with not too much extra work and would make Zabbix more safe. http://wiki.centos.org/HowTos/Https. Next screen, Zabbix will do a check of the PHP settings. Normally they should be fine as Zabbix provides a file with all correct settings. We only had to change the timezone parameter, remember? In case something goes wrong, go back to the zabbix.conf file and check the parameters: Next, we can fill in our connection details to connect to the database. If you remember, we did this already when we installed the server. Don't panic, it's completely normal. Zabbix, as we will see later, can be setup in a modular way so the frontend and the server both need to know where the database is and what the login credentials are. Press Test connection and when you get an OK just press Next again: Next screen, we have to fill in some Zabbix server details. Host and port should already be filled in; if not, put the correct IP and port in the fields. The field Name is not really important for the working of our Zabbix server but it's probably better to fill in a meaningful name here for your Zabbix installation: Now our setup is finished, and we can just click Next till we get our login screen. The Username and Password are standard the first time we set up the Zabbix server and are Admin for Username and zabbix for the Password: Summary In this article we saw how to install and configure Zabbix server, Zabbix agent, and web interface. We also learnt the commands for MySQL server and the Zabbix server with a MySQL connection. Then we went through the installation and configuration of Zabbix agent. Further we learnt to install the Zabbix web frontend packages and installing the firewall packages. Finally the we saw the steps for installation and configuration of Zabbix through screenshots. By learning the basics of Zabbix, we can now proceed with this technology. Resources for Article: Further resources on this subject: Going beyond Zabbix agents [article] Using Proxies to Monitor Remote Locations with Zabbix 1.8 [article] Triggers in Zabbix 1.8 [article]
OpenDaylight Fundamentals

Packt
05 Jul 2017
14 min read
In this article by Jamie Goodyear, Mathieu Lemay, Rashmi Pujar, Yrineu Rodrigues, Mohamed El-Serngawy, and Alexis de Talhouët the authors of the book OpenDaylight Cookbook, we will be covering the following recipes: Connecting OpenFlow switches Mounting a NETCONF device Browsing data models with Yang UI (For more resources related to this topic, see here.) OpenDaylight is a collaborative platform supported by leaders in the networking industry and hosted by the Linux Foundation. The goal of the platform is to enable the adoption of software-defined networking (SDN) and create a solid base for network functions virtualization (NFV). Connecting OpenFlow switches OpenFlow is a vendor-neutral standard communications interface defined to enable the interaction between the control and forwarding channels of an SDN architecture. The OpenFlow plugin project intends to support implementations of the OpenFlow specification as it evolves. It currently supports OpenFlow versions 1.0 and 1.3.2. In addition, to support the core OpenFlow specification, OpenDaylight Beryllium also includes preliminary support for the Table Type Patterns and OF-CONFIG specifications. The OpenFlow southbound plugin currently provides the following components: Flow management Group management Meter management Statistics polling Let's connect an OpenFlow switch to OpenDaylight. Getting ready This recipe requires an OpenFlow switch. If you don't have any, you can use a mininet-vm with OvS installed. You can download mininet-vm from the website: https://github.com/mininet/mininet/wiki/Mininet-VM-Images. Any version should work The following recipe will be presented using a mininet-vm with OvS 2.0.2. How to do it... Start the OpenDaylight distribution using the karaf script. Using this script will give you access to the karaf CLI: $ ./bin/karaf Install the user facing feature responsible for pulling in all dependencies needed to connect an OpenFlow switch: opendaylight-user@root>feature:install odl-openflowplugin-all It might take a minute or so to complete the installation. Connect an OpenFlow switch to OpenDaylight.we will use mininet-vm as our OpenFlow switch as this VM runs an instance of OpenVSwitch:     Login to mininet-vm using:  Username: mininet   Password: mininet    Let's create a bridge: mininet@mininet-vm:~$ sudo ovs-vsctl add-br br0 Now let's connect OpenDaylight as the controller of br0: mininet@mininet-vm:~$ sudo ovs-vsctl set-controller br0 tcp: ${CONTROLLER_IP}:6633    Let's look at our topology: mininet@mininet-vm:~$ sudo ovs-vsctl show 0b8ed0aa-67ac-4405-af13-70249a7e8a96 Bridge "br0" Controller "tcp: ${CONTROLLER_IP}:6633" is_connected: true Port "br0" Interface "br0" type: internal ovs_version: "2.0.2" ${CONTROLLER_IP} is the IP address of the host running OpenDaylight. We're establishing a TCP connection. Have a look at the created OpenFlow node.Once the OpenFlow switch is connected, send the following request to get information regarding the switch:    Type: GET    Headers:Authorization: Basic YWRtaW46YWRtaW4=    URL: http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/ This will list all the nodes under opendaylight-inventory subtree of MD-SAL that store OpenFlow switch information. As we connected our first switch, we should have only one node there. It will contain all the information the OpenFlow switch has, including its tables, its ports, flow statistics, and so on. How it works... Once the feature is installed, OpenDaylight is listening to connection on port 6633 and 6640. 
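As a side note, the inventory request from the previous step does not require a graphical REST client; the same GET can be issued with curl. The admin/admin credentials are the OpenDaylight defaults and are assumed here:

curl -u admin:admin http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/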
Setting up the controller on the OpenFlow-capable switch will immediately trigger a callback on OpenDaylight. It will create the communication pipeline between the switch and OpenDaylight so they can communicate in a scalable and non-blocking way. Mounting a NETCONF device The OpenDaylight component responsible to connect remote NETCONF devices is called the NETCONF southbound plugin aka the netconf-connector. Creating an instance of the netconf-connector will connect a NETCONF device. The NETCONF device will be seen as a mount point in the MD-SAL, exposing the device configuration and operational datastore and its capabilities. These mount points allow applications and remote users (over RESTCONF) to interact with the mounted devices. The netconf-connector currently supports the RFC-6241, RFC-5277 and RFC-6022. The following recipe will explain how to connect a NETCONF device to OpenDaylight. Getting ready This recipe requires a NETCONF device. If you don't have any, you can use the NETCONF test tool provided by OpenDaylight. It can be downloaded from the OpenDaylight Nexus repository: https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/netconf/netconf-testtool/1.0.4-Beryllium-SR4/netconf-testtool-1.0.4-Beryllium-SR4-executable.jar How to do it... Start OpenDaylight karaf distribution using the karaf script. Using this script will give you access to the karaf CLI: $ ./bin/karaf Install the user facing feature responsible for pulling in all dependencies needed to connect an NETCONF device: opendaylight-user@root>feature:install odl-netconf-topology odl-restconf It might take a minute or so to complete the installation. Start your NETCONF device.If you want to use the NETCONF test tool, it is time to simulate a NETCONF device using the following command: $ java -jar netconf-testtool-1.0.1-Beryllium-SR4-executable.jar --device-count 1 This will simulate one device that will be bound to port 17830. Configure a new netconf-connectorSend the following request using RESTCONF:    Type: PUT    URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-deviceBy looking closer at the URL, you will notice that the last part is new-netconf-device. This must match the node-id that we will define in the payload. Headers:Accept: application/xml Content-Type: application/xml Authorization: Basic YWRtaW46YWRtaW4= Payload: <node > <node-id>new-netconf-device</node-id> <host >127.0.0.1</host> <port >17830</port> <username >admin</username> <password >admin</password> <tcp-only >false</tcp- only> </node> Let's have a closer look at this payload:    node-id: Defines the name of the netconf-connector.    address: Defines the IP address of the NETCONF device.    port: Defines the port for the NETCONF session.    username: Defines the username of the NETCONF session. This should be provided by the NETCONF device configuration.    password: Defines the password of the NETCONF session. As for the username, this should be provided by the NETCONF device configuration.    tcp-only: Defines whether or not the NETCONF session should use tcp or ssl. If set to true it will use tcp. This is the default configuration of the netconf-connector; it actually has more configurable elements that will be present in a second part. Once you have completed the request, send it. This will spawn a new netconf-connector that connects to the NETCONF device at the provided IP address and port using the provided credentials. 
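If you prefer the command line to a REST client, the same PUT can be sent with curl. This sketch assumes the XML payload above has been saved to a local file called new-netconf-device.xml (the filename is only an illustration):

curl -u admin:admin -X PUT -H "Content-Type: application/xml" -H "Accept: application/xml" -d @new-netconf-device.xml http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device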
Verify that the netconf-connector has correctly been pushed and get information about the connected NETCONF device.First, you could look at the log to see if any error occurred. If no error has occurred, you will see: 2016-05-07 11:37:42,470 | INFO | sing-executor-11 | NetconfDevice | 253 - org.opendaylight.netconf.sal-netconf-connector - 1.3.0.Beryllium | RemoteDevice{new-netconf-device}: Netconf connector initialized successfully Once the new netconf-connector is created, some useful metadata are written into the MD-SAL's operational datastore under the network-topology subtree. To retrieve this information, you should send the following request: Type: GET Headers: Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device We're using new-netconf-device as the node-id because this is the name we assigned to the netconf-connector in a previous step. This request will provide information about the connection status and device capabilities. The device capabilities are all the yang models the NETCONF device is providing in its hello-message that was used to create the schema context. More configuration for the netconf-connectorAs mentioned previously, the netconf-connector contains various configuration elements. Those fields are non-mandatory, with default values. If you do not wish to override any of these values, you shouldn't provide them. schema-cache-directory: This corresponds to the destination schema repository for yang files downloaded from the NETCONF device. By default, those schemas are saved in the cache directory ($ODL_ROOT/cache/schema). Using this configuration will define where to save the downloaded schema related to the cache directory. For instance, if you assigned new-schema-cache, schemas related to this device would be located under $ODL_ROOT/cache/new-schema-cache/. reconnect-on-changed-schema: If set to true, the connector will auto disconnect/reconnect when schemas are changed in the remote device. The netconf-connector will subscribe to base NETCONF notifications and listens for netconf-capability-change notification. Default value is false. connection-timeout-millis: Timeout in milliseconds after which the connection must be established. Default value is 20000 milliseconds.  default-request-timeout-millis: Timeout for blocking operations within transactions. Once this timer is reached, if the request is not yet finished, it will be canceled. Default value is 60000 milliseconds.  max-connection-attempts: Maximum number of connection attempts. Non-positive or null value is interpreted as infinity. Default value is 0, which means it will retry forever.  between-attempts-timeout-millis: Initial timeout in milliseconds between connection attempts. This will be multiplied by the sleep-factor for every new attempt. Default value is 2000 milliseconds.  sleep-factor: Back-off factor used to increase the delay between connection attempt(s). Default value is 1.5.  keepalive-delay: Netconf-connector sends keep alive RPCs while the session is idle to ensure session connectivity. This delay specifies the timeout between keep alive RPC in seconds. Providing a 0 value will disable this mechanism. Default value is 120 seconds. 
Using this configuration, your payload would look like this: <node > <node-id>new-netconf-device</node-id> <host >127.0.0.1</host> <port >17830</port> <username >admin</username> <password >admin</password> <tcp-only >false</tcp- only> <schema-cache-directory >new_netconf_device_cache</schema-cache-directory> <reconnect-on-changed-schema >false</reconnect-on-changed-schema> <connection-timeout-millis >20000</connection-timeout-millis> <default-request-timeout-millis >60000</default-request-timeout-millis> <max-connection-attempts >0</max-connection-attempts> <between-attempts-timeout-millis >2000</between-attempts-timeout-millis> <sleep-factor >1.5</sleep-factor> <keepalive-delay >120</keepalive-delay> </node> How it works... Once the request to connect a new NETCONF device is sent, OpenDaylight will setup the communication channel, used for managing, interacting with the device. At first, the remote NETCONF device will send its hello-message defining all of the capabilities it has. Based on this, the netconf-connector will download all the YANG files provided by the device. All those YANG files will define the schema context of the device. At the end of the process, some exposed capabilities might end up as unavailable, for two possible reasons: The NETCONF device provided a capability in its hello-message but hasn't provided the schema. ODL failed to mount a given schema due to YANG violation(s). OpenDaylight parses YANG models as per as the RFC 6020; if a schema is not respecting the RFC, it could end up as an unavailable-capability. If you encounter one of these situations, looking at the logs will pinpoint the reason for such a failure. There's more... Once the NETCONF device is connected, all its capabilities are available through the mount point. View it as a pass-through directly to the NETCONF device. Get datastore To see the data contained in the device datastore, use the following request: Type: GET Headers:Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ Adding yang-ext:mount/ to the URL will access the mount point created for new-netconf-device. This will show the configuration datastore. If you want to see the operational one, replace config by operational in the URL. If your device defines yang model, you can access its data using the following request: Type: GET Headers:Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<container> The <module> represents a schema defining the <container>. The <container> can either be a list or a container. It is not possible to access a single leaf. You can access containers/lists within containers/lists. The last part of the URL would look like this: …/ yang-ext:mount/<module>:<container>/<sub-container> Invoke RPC In order to invoke an RPC on the remote device, you should use the following request: Type: POST Headers:Accept: application/xml Content-Type: application/xml Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<operation> This URL is accessing the mount point of new-netconf-device, and through this mount point we're accessing the <module> to call its <operation>. 
The <module> represents a schema defining the RPC and <operation> represents the RPC to call. Delete a netconf-connector Removing a netconf-connector will drop the NETCONF session and all resources will be cleaned. To perform such an operation, use the following request: Type: DELETE Headers:Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device By looking closer to the URL, you can see that we are removing the NETCONF node-id new-netconf-device. Browsing data models with Yang UI Yang UI is a user interface application through which one can navigate among all yang models available in the OpenDaylight controller. Not only does it aggregate all data models, it also enables their usage. Using this interface, you can create, remove, update, and delete any part of the model-driven datastore. It provides a nice, smooth user interface making it easier to browse through the model(s). This recipe will guide you through those functionalities. Getting ready This recipe only requires the OpenDaylight controller and a web-browser. How to do it... Start your OpenDaylight distribution using the karaf script. Using this client will give you access to the karaf CLI: $ ./bin/karaf Install the user facing feature responsible to pull in all dependencies needed to use Yang UI: opendaylight-user@root>feature:install odl-dlux-yangui It might take a minute or so to complete the installation. Navigate to http://localhost:8181/index.html#/yangui/index.Username: admin Password: admin Once logged in, all modules will be loading until you can see this message at the bottom of the screen: Loading completed successfully You should see the API tab listing all yang models in the following format: <module-name> rev.<revision-date> For instance:     cluster-admin rev.2015-10-13     config rev.2013-04-05     credential-store rev.2015-02-26 By default, there isn't much you can do with the provided yang models. So, let's connect an OpenFlow switch to better understand how to use this Yang UI. Once done, refresh your web page to load newly added modules. Look for opendaylight-inventory rev.2013-08-19 and select the operational tab, as nothing will yet be in the config datastore. Then click on nodes and you'll see a request bar at the bottom of the page with multiple options.You can either copy the request to the clipboard to use it on your browser, send it, show a preview of it, or define a custom API request. For now, we will only send the request. You should see Request sent successfully and under this message should be the retrieved data. As we only have one switch connected, there is only one node. All the switch operational information is now printed on your screen. You could do the same request by specifying the node-id in the request. To do that you will need to expand nodes and click on node {id}, which will enable a more fine-grained search. How it works... OpenDaylight has a model-driven architecture, which means that all of its components are modeled using YANG. While installing features, OpenDaylight loads YANG models, making them available within the MD-SAL datastore. YangUI is a representation of this datastore. Each schema represents a subtree based on the name of the module and its revision-date. YangUI aggregates and parses all those models. It also acts as a REST client; through its web interface we can execute functions such as GET, POST, PUT, and DELETE. 
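Because Yang UI is ultimately just a RESTCONF client, anything it displays can also be fetched directly. For example, the node-specific request mentioned above could look like the following; the node id openflow:1 is only an assumption, so use whatever id your switch shows up with in the inventory:

curl -u admin:admin http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:1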
There's more… The example shown previously can be improved upon, as no user yang model was loaded. For instance, if you mount a NETCONF device containing its own yang models, you can interact with it through Yang UI: use the config datastore to push or update some data, and you will see the operational datastore updated accordingly. In addition, accessing your data this way is much easier than having to define the exact RESTCONF URL by hand. See also Using API doc as a REST API client. Summary Throughout this article, we learned recipes for connecting OpenFlow switches, mounting a NETCONF device, and browsing data models with Yang UI. Resources for Article: Further resources on this subject: Introduction to SDN - Transformation from legacy to SDN [article] The OpenFlow Controllers [article]
Innovation of Communication and Information Technologies

Packt
05 Jun 2013
12 min read
(For more resources related to this topic, see here.) Communication is not something which we consciously think about in most situations. However, let me urge you to start observing how often you communicate daily with others and particularly the ways and means you do this. I think you can then agree that communication and targeted exchange of information are basic components and a foundation of our lives. Living creatures communicate with one another via some form of tool or other means. In the prehistoric era, humankind communicated information using images, characters, sounds, and later music from the sender to the receiver. However, the uniqueness of this information was not always clearly given and messages communicated by this means did not always reach the intended recipient. Looking into the past, the origin of cooperation was to create success, whether in career opportunities or social recognition to secure survival for his fellow men and himself. In the course of time and history, many possibilities derived from technical-evolutionary ideas have been developed for communication, cooperation, and exchange of information over many centuries. Ultimately it is the human drive to be successful and efficient in communicating, processing, and transmitting information that drives this process. Let us look back at the technological developments that have taken place at the end of the nineteenth century and the beginning of the twentieth century. At that time, our "modern" information exchange and communication by telegraphy (1837, Samuel Morse F. B.) or telephony (1876, Alexander Graham Bell) were in their infant stages. Telephones were innovative pieces of equipment that had to be integrated into companies or offices in order to process information more efficiently and rapidly without having to go through an interconnecting party. These developments were tools of communication which eventually seeded the development of today's modern communication— an integrated solution for communication and collaboration, in short, unified communications/unified messaging. Defining communication A communication tool is not simply a tool for communication but it supports us in our everyday life and it is also a "tool" in our professional dealings at work, to give us information and knowledge to help us make more informed decisions better and faster. This technological advancement affects not only our individual lives but also structures in which companies and even the global economy cooperate and collaborate. Communicating relevant, mission-critical information is an indispensable task in today's work environment as well as in our private lives. Success is in part based on how much knowledge we have, which itself ultimately depends on how we communicate and how fast we process information. In the past few decades, for both our professional and private lives, we communicated using media such as paper and pen, typewriter, and the phone. In the business environment, the use of paper to communicate with companies outside caused vast quantities of paper to be transported over long routes, which resulted in long waiting times and inefficient flow of information. Introduction of the early phone system helped to greatly increase efficiency but was subject to the limited availability of the intermediate connector to the other party and allowed only audio data to be transferred. 
The inability to transfer other forms of data such as documents, screenshots, or other contextual information is a limitation when you are only able to communicate by audio. This, however, does not mean that individuals or companies in the past were inefficient and unsuccessful because they were unable to communicate the way we are doing today. Being "successful" is a temporary condition, which once reached, does not automatically continue for a team or an individual. Success is the result of a consistent investment in the uniqueness of the company or of the individual, the use of performance-relevant core competencies, and the ability to learn faster than others and to change in a broader sense. The right strategies, visions, mission statements, organizational structures, competent managers, and especially employees are important for success. But correct and relevant information to make sound and informed decisions helps the organization to be continually successful. Thus, the critical role is to look into ways employees, departments, offices, and business partners communicate and collaborate with one another. Fundamentally, basic communication, as you can imagine, is the basis for all these forms of modern communication and collaboration to take place. In order to better understand current and future possibilities for communication and collaboration let me first take you back once again to the past. What changed the communication industry? In the past few decades, a technical change took place due to the changing conditions and requirements of information processing with the need to communicate and collaborate in our global economy. In addition to globalization, an important benefit for business was also created by countless technical developments. For example, the invention of the Internet was considered one of the biggest changes in the information community since the invention of the printing press. The initial networking of universities and research institutes which later spread into the commercial sector, and eventually to the private sector, had an unexpected impact on various areas of everyday life. In 1990, the Internet was given virtually free by the US National Science Foundation to the world as a communication network for various technology companies, research institutes, and universities to develop. The following diagram illustrates the development of telecommunication since the mid-eighteenth century and the innovations that were invented with software-based technology. In other words, traditional telecommunication- and software-based communication and collaboration technologies are coming closer together, merging and building a strong convergence for the future. Technological development of the Internet also created changing conditions in the market economy. Initially, these new possibilities on the part of many companies were more or less declared as "utopia" or only short-term achievement. The initial technology was available but insufficient to provide a real benefit for companies. However, in 1993, through the development of new communication protocols by Tim Berners-Lee (who is considered the inventor of the World Wide Web and the HTTP protocol) and CERN, there was a rapid boost of the Internet by increasing the efficient exchange of information. E-mail as a carrier of information in business and for private users was usable with these extensions and innovations of the Internet. 
On closer inspection, the exchange of information via e-mail probably created the first milestone for trends such as the "paperless office", to create savings in shipping and telephone costs. Understanding modern business communication needs Today, e-mail is not a trend but an established type of communication that is deeply integrated with our communication and business processes. Although "paperless office" and telephone cost savings using this technology have been realized, communication in companies significantly increased because you spend less time and less cost to transport information from point A to point B. What is the relationship between technologies, such as the Internet, e-mail, and the phone and the information processing and collaboration for businesses? The very same question is asked; how can we make cooperation within and outside companies more efficient? How can we communicate easier and quicker? Can companies achieve their goals more easily with these tools? Are there ways to avoid/reduce costs to increase savings? Can we make it easier to deal with this knowledge change? What potential gains and advantages exist here for the companies? What kind of changes will we need in the company and how will the changes affect the person who has to implement them? How can in-house projects be realized in order to improve the communication? To answer these questions, it is important to look at the communication trends in recent years. In the past five to ten years, with combined usage of previously developed and established milestones in communication technology, the Internet and the telephone, we were able to benefit from efficient client information while constantly developing communication technology. Wi-Fi, which has only existed for several years as a standard in companies and in public places such as airports, cafes, and so on, is now used by nearly all mobile communications devices. Wi-Fi allows us to communicate with words and images wirelessly to the Internet/local area network. Computer Telephony Integration (CTI), fax, and voicemail are some of the basic terms that play a special role in cooperation and information processing. The fact that almost every workplace has an Internet connection these days shows the need to simplify all communication possibilities for all users. This includes the Internet, e-mail, landline phones, mobile phones, video conferencing equipment, tablets, PCs, netbooks, and smartphones. It is important to highlight that connectivity to the Internet and telecommunication services is still a challenge in some areas of the world and even a luxury for some developed countries and regions. Even in these developed places we will find some remote locations with limited (slow) Internet connections. More often than not, new software promising better benefits in communication and cooperation tends to overwhelm employees in the company. Year after year, companies have invested over and over again in new technologies trying to get a competitive edge over other companies, by improving their internal processes and procedures through more efficient communication methods or technologies. In the past few years, the number and complexity of technologies and processes has escalated so much that these developments and investments are showing signs of having a negative impact on the efficiency and effectiveness of the company. 
Also, many business owners believed that pure investment in new tools, new software programs, and new communication equipment is the sole solution for better structural communication. This circumstance is still one of the top challenges and problems for IT and change processes in organizations. Since the 1990s, according to international studies (such as the Federal Reserve Board of Governors) a large percentage of the available budget is used for communications and information technologies. The chart shows that companies are continuously upgrading and investing in their technology. Even though the study begun in the last century, it is a fact that this same development has progressed to today. Through the investments made, it is obvious that IT investments are an above-average priority for many companies. The dotted line "Actual real IT investment" shows the IT investments realized in the North American region, the solid line "Target real IT investment" shows, on the other hand, the "target" IT investment is based on financial planning and forecasting of organizations. In the other chapters of this book, we will specifically focus on the "pure" IT investments (which are the actual IT investments). Through more expensive investments in information and communication technology, we will be able to see clearly that we need more than a wealth of different complex technologies to communicate and collaborate amongst employees, customers, partner companies, and so on. Precisely for this reason, software and technology companies that were developed a few years ago started implementing solutions in this field of communications so that technology "should unify the main day-to-day communication tools". Evolution of communication tools The old wired phone of the past has evolved over the years to include more uses and functionalities, and has been transformed into today's phone, which is effectively a mobile communication link. A telephone using its own PBX (Private Branch Exchange = Telephony system) in the company is very different from a modern mobile phone with the integrated mobile phone operator services. Global companies led the integration of different technologies to improve communication. Internal studies and analyses that were conducted showed that the average employee uses many devices such as a PC, a work phone (landline), possibly several mobile phones, a tablet, a fax, a notebook, and/or a Netbook. Perhaps you can still remember the "Pager", which was expensive when released but has long since been replaced by innovations from the mobile industry (excluding a few regions and certain professional areas such as the hospitals that are still using pagers for urgent communication). Of course, such studies on consumer communication are not solely used to find out the number of devices per user, but other important data such as the frequency and intensity of usage on the various devices so that companies can invest in the appropriate technology and the staff to facilitate communication and collaboration with others. Such studies revealed high costs for the workplace equipment and loss of efficiency was caused by an overlap and a "flood" of technologies. Due to this confusion of numerous devices and their associated communication chaos, many companies have invested increasingly in the so-called unified messaging solutions. The focus of such solutions is to provide individual employees a standardized tool to carry out their job functions more efficiently. 
This represents the portion of unified messaging known as CTI, mentioned earlier. CTI solutions are a specialized kind of software used to integrate workplace phones into workstation software such as Microsoft Office and Exchange, Lotus Notes, Novell GroupWise, and others. A typical use case is a PC with Microsoft Outlook installed, where an information worker can click to call from an e-mail or from the address book and also use basic CTI features such as joining a call to a conference, putting a call on hold, or forwarding and hanging up a call. The goal of unified messaging solutions is to take the load of the various complex technologies away so that we can communicate rapidly and manage human resources within the business efficiently. By linking telecommunications with electronic data processing, CTI solutions make functions such as answering, terminating, and automatically dialing telephone calls from a personal computer (PC) possible. Fax and voicemail are also part of unified messaging. Electronic faxes can be sent and received from any workstation, and voicemails can be recorded and played back using unified messaging applications. Using these solutions brings not only comprehensive efficiency but also cost savings, without investing in more new telephone systems, add-ons (plugins), proprietary software, fax machines, telephones, and more. However, this integration solved the problems of unified messaging applications and technologies only partially, because ever more new communication possibilities keep appearing within the company and on the Internet, with additional phone numbers, e-mail addresses, and various web communication options.

Troubleshooting OpenVPN 2: Configurations

Packt
21 Feb 2011
10 min read
OpenVPN 2 Cookbook 100 simple and incredibly effective recipes for harnessing the power of the OpenVPN 2 network Set of recipes covering the whole range of tasks for working with OpenVPN The quickest way to solve your OpenVPN problems! Set up, configure, troubleshoot and tune OpenVPN Uncover advanced features of OpenVPN and even some undocumented options Introduction The topic of this article is troubleshooting OpenVPN. This article will focus on troubleshooting OpenVPN misconfigurations. The recipes in this article will therefore deal first with breaking the things. We will then provide the tools on how to find and solve the configuration errors. Some of the configuration directives used in this article have not been demonstrated before, so even if you are not interested in breaking things this article will still be insightful. Cipher mismatches In this recipe, we will change the cryptographic ciphers that OpenVPN uses. Initially, we will change the cipher only on the client side, which will cause the initialization of the VPN connection to fail. The primary purpose of this recipe is to show the error messages that appear, not to explore the different types of ciphers that OpenVPN supports. Getting ready Install OpenVPN 2.0 or higher on two computers. Make sure the computers are connected over a network. Set up the client and server certificates. For this recipe, the server computer was running CentOS 5 Linux and OpenVPN 2.1.1. The client was running Fedora 13 Linux and OpenVPN 2.1.1. Keep the server configuration file basic-udp-server.conf (download code, ch:2) and the client configuration file basic-udp-client.conf at hand. How to do it... Start the server using the configuration file basic-udp-server.conf: [root@server]# openvpn --config basic-udp-server.conf Next, create the client configuration file by appending a line to the basic-udp-client.conf file: cipher CAST5-CBC Save it as example7-1-client.conf. Start the client, after which the following message will appear in the client log: [root@client]# openvpn --config example7-1-client.conf ... WARNING: 'cipher' is used inconsistently, local='cipher CAST5- CBC', remote='cipher BF-CBC' ... [openvpnserver] Peer Connection Initiated with server-ip:1194 ... TUN/TAP device tun0 opened ... /sbin/ip link set dev tun0 up mtu 1500 ... /sbin/ip addr add dev tun0 192.168.200.2/24 broadcast 192.168.200.255 ... Initialization Sequence Completed ... Authenticate/Decrypt packet error: cipher final failed And, similarly, on the server side: ... client-ip:52461 WARNING: 'cipher' is used inconsistently, local='cipher BF-CBC', remote='cipher CAST5-CBC' ... client-ip:52461 [openvpnclient1] Peer Connection Initiated with openvpnclient1:52461 ... openvpnclient1/client-ip:52461 Authenticate/Decrypt packet error: cipher final failed ... openvpnclient1/client-ip:52461 Authenticate/Decrypt packet error: cipher final failed The connection will not be successfully established, but it will also not be disconnected immediately. How it works... During the connection phase, the client and the server negotiate several parameters needed to secure the connection. One of the most important parameters in this phase is the encryption cipher, which is used to encrypt and decrypt all the messages. If the client and server are using different ciphers, then they are simply not capable of talking to each other. By adding the following configuration directive to the server configuration file, the client and the server can communicate again: cipher CAST5-CBC There's more... 
OpenVPN supports quite a few ciphers, although support for some of the ciphers is still experimental. To view the list of supported ciphers, type: $ openvpn --show-ciphers This will list all ciphers with both variables and fixed cipher length. The ciphers with variable cipher length are very well supported by OpenVPN, the others can sometimes lead to unpredictable results. TUN versus TAP mismatches A common mistake when setting up a VPN based on OpenVPN is the type of adapter that is used. If the server is configured to use a TUN-style network but a client is configured to use a TAP-style interface, then the VPN connection will fail. In this recipe, we will show what is typically seen when this common configuration error is made. Getting ready Install OpenVPN 2.0 or higher on two computers. Make sure the computers are connected over a network. Set up the client and server certificates (Download code-ch:2 here). For this recipe, the server computer was running CentOS 5 Linux and OpenVPN 2.1.1. The client was running Fedora 13 Linux and OpenVPN 2.1.1. Keep the server configuration file basic-udp-server.conf (Download code-ch:2 here) and the client configuration file basic-udp-client.confat hand. How to do it... Start the server using the configuration file basic-udp-server.conf: [root@server]# openvpn --config basic-udp-server.conf Next, create the client configuration: client proto udp remote openvpnserver.example.com port 1194 dev tap nobind ca /etc/openvpn/cookbook/ca.crt cert /etc/openvpn/cookbook/client1.crt key /etc/openvpn/cookbook/client1.key tls-auth /etc/openvpn/cookbook/ta.key 1 ns-cert-type server Save it as example7-2-client.conf. Start the client [root@client]# openvpn --config example7-2-client.conf The client log will show: ... WARNING: 'dev-type' is used inconsistently, local='dev-type tap', remote='dev-type tun' ... WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1573', remote='link-mtu 1541' ... WARNING: 'tun-mtu' is used inconsistently, local='tun-mtu 1532', remote='tun-mtu 1500' ... [openvpnserver] Peer Connection Initiated with server-ip:1194 ... TUN/TAP device tap0 opened ... /sbin/ip link set dev tap0 up mtu 1500 ... /sbin/ip addr add dev tap0 192.168.200.2/24 broadcast 192.168.200.255 ... Initialization Sequence Completed At this point, you can try pinging the server, but it will respond with an error: [client]$ ping 192.168.200.1 PING 192.168.200.1 (192.168.200.1) 56(84) bytes of data. From 192.168.200.2 icmp_seq=2 Destination Host Unreachable From 192.168.200.2 icmp_seq=3 Destination Host Unreachable From 192.168.200.2 icmp_seq=4 Destination Host Unreachable How it works... A TUN-style interface offers a point-to-point connection over which only TCP/IP traffic can be tunneled. A TAP-style interface offers the equivalent of an Ethernet interface that includes extra headers. This allows a user to tunnel other types of traffic over the interface. When the client and the server are misconfigured, the expected packet size is different: ... WARNING: 'tun-mtu' is used inconsistently, local='tun-mtu 1532', remote='tun-mtu 1500' This shows that each packet that is sent through a TAP-style interface is 32 bytes larger than the packets sent through a TUN-style interface. By correcting the client configuration, this problem is resolved. Compression mismatches OpenVPN supports on-the-fly compression of the traffic that is sent over the VPN tunnel. This can improve the performance over a slow network line, but it does add a little overhead. 
When transferring uncompressible data (such as ZIP files), the performance actually decreases slightly. If the compression is enabled on the server but not on the client, then the VPN connection will fail. Getting ready Install OpenVPN 2.0 or higher on two computers. Make sure the computers are connected over a network. Set up the client and server certificates. For this recipe, the server computer was running CentOS 5 Linux and OpenVPN 2.1.1. The client was running Fedora 13 Linux and OpenVPN 2.1.1. Keep the server configuration file basic-udp-server.conf (Download code-ch:2 here) and the client configuration file basic-udp-client.confat hand.. How to do it... Append a line to the server configuration file basic-udp-server.conf: comp-lzo Save it as example7-3-server.conf. Start the server: [root@server]# openvpn --config example7-3-server.conf Next, start the client: [root@client]# openvpn --config basic-udp-client.conf The connection will initiate but when data is sent over the VPN connection, the following messages will appear: Initialization Sequence Completed ... write to TUN/TAP : Invalid argument (code=22) ... write to TUN/TAP : Invalid argument (code=22) How it works... During the connection phase, no compression is used to transfer information between the client and the server. One of the parameters that is negotiated is the use of compression for the actual VPN payload. If there is a configuration mismatch between the client and the server, then both the sides will get confused by the traffic that the other side is sending. With a network fully comprising OpenVPN 2.1 clients and an OpenVPN 2.1 server, this can be fixed for all the clients by just adding another line: push "comp-lzo" There's more... OpenVPN 2.0 did not have the ability to push compression directives to the clients. This means that an OpenVPN 2.0 server does not understand this directive, nor do OpenVPN 2.0 clients. So, if an OpenVPN 2.1 server pushes out this directive to an OpenVPN 2.0 client, the connection will fail. Key mismatches OpenVPN offers extra protection for its TLS control channel in the form of HMAC keys. These keys are exactly the same as the static "secret" keys used in point-to-point style networks. For multi-client style networks, this extra protection can be enabled using the tls-auth directive . If there is a mismatch between the client and the server related to this tls-auth key , then the VPN connection will fail to get initialized. Getting ready Install OpenVPN 2.0 or higher on two computers. Make sure the computers are connected over a network. Set up the client and server certificates using the first recipe. For this recipe, the server computer was running CentOS 5 Linux and OpenVPN 2.1.1. The client was running Fedora 13 Linux and OpenVPN 2.1.1. Keep the server configuration file basic-udp-server.conf (Download code-ch:2 here) and the client configuration file basic-udp-client.conf at hand. How to do it... Start the server using the configuration file basic-udp-server.conf: [root@server]# openvpn --config basic-udp-server.conf Next, create the client configuration: client proto udp remote openvpnserver port 1194 dev tun nobind ca /etc/openvpn/cookbook/ca.crt cert /etc/openvpn/cookbook/client1.crt key /etc/openvpn/cookbook/client1.key tls-auth /etc/openvpn/cookbook/ta.key ns-cert-type server Note the lack of the second parameter for tls-auth. Save it as example7-4-client.conf file. 
Start the client: [root@client]# openvpn --config example7-4-client.conf The client log will show no errors, but the connection will not be established either. In the server log we'll find: ... Initialization Sequence Completed ... Authenticate/Decrypt packet error: packet HMAC authentication failed ... TLS Error: incoming packet authentication failed from client- ip:54454 This shows that the client openvpnclient1 is connecting using the wrong tls-auth parameter and the connection is refused. How it works... At the very first phase of the connection initialization, the client and the server verify each other's HMAC keys. If an HMAC key is not configured correctly, then the initialization is aborted and the connection will fail to establish. As the OpenVPN server is not able to determine whether the client is simply misconfigured or whether a malicious client is trying to overload the server, the connection is simply dropped. This causes the client to keep listening for the traffic from the server, until it eventually times out. In this recipe, the misconfiguration consisted of the missing parameter 1 behind: tls-auth /etc/openvpn/cookbook/ta.key The second parameter to the tls-auth directive is the direction of the key. Normally, the following convention is used: 0: from server to client 1: from client to server This parameter causes OpenVPN to derive its HMAC keys from a different part of the ta.key file. If the client and server disagree on which parts the HMAC keys are derived from, the connection cannot be established. Similarly, when the client and server are deriving the HMAC keys from different ta.key files, the connection can also not be established.
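To make the direction convention concrete, here is a minimal sketch of matching tls-auth settings for the cookbook-style setup. The key path matches the examples above; the genkey command is the standard way to create such a shared key and is shown only for completeness:

# generate the shared HMAC key once, then copy it securely to the client:
openvpn --genkey --secret /etc/openvpn/cookbook/ta.key

# in the server configuration (for example, basic-udp-server.conf):
tls-auth /etc/openvpn/cookbook/ta.key 0

# in the client configuration (for example, basic-udp-client.conf):
tls-auth /etc/openvpn/cookbook/ta.key 1

Both sides must use an identical copy of ta.key; only the direction parameter differs, with 0 on the server and 1 on the client.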

Network programming 101 with GAWK (GNU AWK)

Pavan Ramchandani
31 May 2018
12 min read
In today's tutorial, we will learn about the networking aspects, for example working with TCP/IP for both client-side and server-side. We will also explore HTTP services to help you get going with networking in AWK. This tutorial is an excerpt from a book written by Shiwang Kalkhanda, titled Learning AWK Programming. The AWK programming language was developed as a pattern-matching language for text manipulation; however, GAWK has advanced features, such as file-like handling of network connections. We can perform simple TCP/IP connection handling in GAWK with the help of special filenames. GAWK extends the two-way I/O mechanism used with the |& operator to simple networking using these special filenames that hide the complex details of socket programming to the programmer. The special filename for network communication is made up of multiple fields, all of which are mandatory. The following is the syntax of creating a filename for network communication: /net-type/protocol/local-port/remote-host/remote-port Each field is separated from another with a forward slash. Specifying all of the fields is mandatory. If any of the field is not valid for any protocol or you want the system to pick a default value for that field, it is set as 0. The following list illustrates the meaning of different fields used in creating the file for network communication: net-type: Its value is inet4 for IPv4, inet6 for IPv6, or inet to use the system default (which is generally IPv4). protocol: It is either tcp or udp for a TCP or UDP IP connection. It is advised you use the TCP protocol for networking. UDP is used when low overhead is a priority. local-port: Its value decides which port on the local machine is used for communication with the remote system. On the client side, its value is generally set to 0 to indicate any free port to be picked up by the system itself. On the server side, its value is other than 0 because the service is provided to a specific publicly known port number or service name, such as http, smtp, and so on. remote-host: It is the remote hostname which is to be at the other end of the connection. For the server side, its value is set to 0 to indicate the server is open for all other hosts for connection. For the client side, its value is fixed to one remote host and hence, it is always different from 0. This name can either be represented through symbols, such as www.google.com, or numbers, 123.45.67.89. remote-port: It is the port on which the remote machine will communicate across the network. For clients, its value is other than 0, to indicate to which port they are connecting to the remote machine. For servers, its value is the port on which they want connection from the client to be established. We can use a service name here such as ftp, http, or a port number such as 80, 21, and so on. TCP client and server (/inet/tcp) TCP gaurantees that data is received at the other end and in the same order as it was transmitted, so always use TCP. In the following example, we will create a tcp-server (sender) to send the current date time of the server to the client. The server uses the strftime() function with the coprocess operator to send to the GAWK server, listening on the 8080 port. The remote host and remote port could be any client, so its value is kept as 0. 
The server connection is closed by passing the special filename to the close() function for closing the file as follows: $ vi tcpserver.awk #TCP-Server BEGIN { print strftime() |& "/inet/tcp/8080/0/0" close("/inet/tcp/8080/0/0") } Now, open one Terminal and run this program before running the client program as follows: $ awk -f tcpserver.awk Next, we create the tcpclient (receiver) to receive the data sent by the tcpserver. Here, we first create the client connection and pass the received data to the getline() using the coprocess operator. Here the local-port value is set to 0 to be automatically chosen by the system, the remote-host is set to the localhost, and the remote-port is set to the tcp-server port, 8080. After that, the received message is printed, using the print $0 command, and finally, the client connection is closed using the close command, as follows: $ vi tcpclient.awk #TCP-client BEGIN { "/inet/tcp/0/localhost/8080" |& getline print $0 close("/inet/tcp/0/localhost/8080") } Now, execute the tcpclient program in another Terminal as follows : $ awk -f tcpclient.awk The output of the previous code is as follows : Fri Feb 9 09:42:22 IST 2018 UDP client and server ( /inet/udp ) The server and client programs that use the UDP protocol for communication are almost identical to their TCP counterparts, with the only difference being that the protocol is changed to udp from tcp. So, the UDP-server and UDP-client program can be written as follows: $ vi udpserver.awk #UDP-Server BEGIN { print strftime() |& "/inet/udp/8080/0/0" "/inet/udp/8080/0/0" |& getline print $0 close("/inet/udp/8080/0/0") } $ awk -f udpserver.awk Here, only one addition has been made to the client program. In the client, we send the message hello from client ! to the server. So when we execute this program on the receiving Terminal, where the udpclient.awk program is run, we get the remote system date time. And on the Terminal where the udpserver.awk program is run, we get the hello message from the client: $ vi udpclient.awk #UDP-client BEGIN { print "hello from client!" |& "/inet/udp/0/localhost/8080" "/inet/udp/0/localhost/8080" |& getline print $0 close("/inet/udp/0/localhost/8080") } $ awk -f udpclient.awk GAWK can be used to open direct sockets only. Currently, there is no way to access services available over an SSL connection such as https, smtps, pop3s, imaps, and so on. Reading a web page using HttpService To read a web page, we use the Hypertext Transfer Protocol (HTTP ) service which runs on port number 80. First, we redefine the record separators RS and ORS because HTTP requires CR-LF to separate lines. The program requests to the IP address 35.164.82.168 ( www.grymoire.com ) of a static website which, in turn, makes a GET request to the web page: http://35.164.82.168/Unix/donate.html . HTTP calls the GET request, a method which tells the web server to transmit the web page donate.html. The output is stored in the getline function using the co-process operator and printed on the screen, line by line, using the while loop. Finally, we close the http service connection. 
The following is the program to retrieve the web page: $ vi view_webpage.awk BEGIN { RS=ORS="rn" http = "/inet/tcp/0/35.164.82.168/80" print "GET http://35.164.82.168/Unix/donate.html" |& http while ((http |& getline) > 0) print $0 close(http) } $ awk -f view_webpage.awk Upon executing the program, it fills the screen with the source code of the page on the screen as follows: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML lang="en-US"> <HEAD> <TITLE> Welcome to The UNIX Grymoire!</TITLE> <meta name="keywords" content="grymoire, donate, unix, tutorials, sed, awk"> <META NAME="Description" CONTENT="Please donate to the Unix Grymoire" > <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <link href="myCSS.css" rel="stylesheet" type="text/css"> <!-- Place this tag in your head or just before your close body tag --> <script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script> <link rel="canonical" href="http://www.grymoire.com/Unix/donate.html"> <link href="myCSS.css" rel="stylesheet" type="text/css"> ........ ........ Profiling in GAWK Profiling of code is done for code optimization. In GAWK, we can do profiling by supplying a profile option to GAWK while running the GAWK program. On execution of the GAWK program with that option, it creates a file with the name awkprof.out. Since GAWK is performing profiling of the code, the program execution is up to 45% slower than the speed at which GAWK normally executes. Let's understand profiling by looking at some examples. In the following example, we create a program that has four functions; two arithmetic functions, one function prints an array, and one function calls all of them. Our program also contains two BEGIN and two END statements. First, the BEGIN and END statement and then it contains a pattern action rule, then the second BEGIN and END statement, as follows: $ vi codeprof.awk func z_array(){ arr[30] = "volvo" arr[10] = "bmw" arr[20] = "audi" arr[50] = "toyota" arr["car"] = "ferrari" n = asort(arr) print "Array begins...!" print "=====================" for ( v in arr ) print v, arr[v] print "Array Ends...!" print "=====================" } function mul(num1, num2){ result = num1 * num2 printf ("Multiplication of %d * %d : %dn", num1,num2,result) } function all(){ add(30,10) mul(5,6) z_array() } BEGIN { print "First BEGIN statement" print "=====================" } END { print "First END statement " print "=====================" } /maruti/{print $0 } BEGIN { print "Second BEGIN statement" print "=====================" all() } END { print "Second END statement" print "=====================" } function add(num1, num2){ result = num1 + num2 printf ("Addition of %d + %d : %dn", num1,num2,result) } $ awk -- prof -f codeprof.awk cars.dat The output of the previous code is as follows: First BEGIN statement ===================== Second BEGIN statement ===================== Addition of 30 + 10 : 40 Multiplication of 5 * 6 : 30 Array begins...! ===================== 1 audi 2 bmw 3 ferrari 4 toyota 5 volvo Array Ends...! ===================== maruti swift 2007 50000 5 maruti dezire 2009 3100 6 maruti swift 2009 4100 5 maruti esteem 1997 98000 1 First END statement ===================== Second END statement ===================== Execution of the previous program also creates a file with the name awkprof.out. 
If we want to create this profile file with a custom name, then we can specify the filename as an argument to the --profile option as follows: $ awk --prof=codeprof.prof -f codeprof.awk cars.dat Now, upon execution of the preceding code we get a new file with the name codeprof.prof. Let's try to understand the contents of the file codeprof.prof created by the profiles as follows: # gawk profile, created Fri Feb 9 11:01:41 2018 # BEGIN rule(s) BEGIN { 1 print "First BEGIN statement" 1 print "=====================" } BEGIN { 1 print "Second BEGIN statement" 1 print "=====================" 1 all() } # Rule(s) 12 /maruti/ { # 4 4 print $0 } # END rule(s) END { 1 print "First END statement " 1 print "=====================" } END { 1 print "Second END statement" 1 print "=====================" } # Functions, listed alphabetically 1 function add(num1, num2) { 1 result = num1 + num2 1 printf "Addition of %d + %d : %dn", num1, num2, result } 1 function all() { 1 add(30, 10) 1 mul(5, 6) 1 z_array() } 1 function mul(num1, num2) { 1 result = num1 * num2 1 printf "Multiplication of %d * %d : %dn", num1, num2, result } 1 function z_array() { 1 arr[30] = "volvo" 1 arr[10] = "bmw" 1 arr[20] = "audi" 1 arr[50] = "toyota" 1 arr["car"] = "ferrari" 1 n = asort(arr) 1 print "Array begins...!" 1 print "=====================" 5 for (v in arr) { 5 print v, arr[v] } 1 print "Array Ends...!" 1 print "=====================" } This profiling example explains the various basic features of profiling in GAWK. They are as follows: The first look at the file from top to bottom explains the order of the program in which various rules are executed. First, the BEGIN rules are listed followed by the BEGINFILE rule, if any. Then pattern-action rules are listed. Thereafter, ENDFILE rules and END rules are printed. Finally, functions are listed in alphabetical order. Multiple BEGIN and END rules retain their places as separate identities. The same is also true for the BEGINFILE and ENDFILE rules. The pattern-action rules have two counts. The first number, to the left of the rule, tells how many times the rule's pattern was tested for the input file/record. The second number, to the right of the rule's opening left brace, with a comment, shows how many times the rule's action was executed when the rule evaluated to true. The difference between the two indicates how many times the rules pattern evaluated to false. If there is an if-else statement then the number shows how many times the condition was tested. At the right of the opening left brace for its body is a count showing how many times the condition was true. The count for the else statement tells how many times the test failed.  The count at the beginning of a loop header (for or while loop) shows how many times the loop conditional-expression was executed. In user-defined functions, the count before the function keyword tells how many times the function was called. The counts next to the statements in the body show how many times those statements were executed. The layout of each block uses C-style tabs for code alignment. Braces are used to mark the opening and closing of a code block, similar to C-style. Parentheses are used as per the precedence rule and the structure of the program, but only when needed. Printf or print statement arguments are enclosed in parentheses, only if the statement is followed by redirection. 
GAWK also gives leading comments before rules, such as before BEGIN and END rules, BEGINFILE and ENDFILE rules, and pattern-action rules, and before functions. GAWK provides a standard representation in a profiled version of the program. GAWK also accepts another option, --pretty-print. The following is an example of pretty-printing an AWK program: $ awk --pretty-print -f codeprof.awk cars.dat When GAWK is called with pretty-print, the program generates awkprof.out, but this time without any execution counts in the output. Pretty-print output also preserves any original comments if they are given in a program, while the profile option omits the original program's comments. The file created on execution of the program with the --pretty-print option is as follows: # gawk profile, created Fri Feb 9 11:04:19 2018 # BEGIN rule(s) BEGIN { print "First BEGIN statement" print "=====================" } BEGIN { print "Second BEGIN statement" print "=====================" all() } # Rule(s) /maruti/ { print $0 } # END rule(s) END { print "First END statement " print "=====================" } END { print "Second END statement" print "=====================" } # Functions, listed alphabetically function add(num1, num2) { result = num1 + num2 printf "Addition of %d + %d : %d\n", num1, num2, result } function all() { add(30, 10) mul(5, 6) z_array() } function mul(num1, num2) { result = num1 * num2 printf "Multiplication of %d * %d : %d\n", num1, num2, result } function z_array() { arr[30] = "volvo" arr[10] = "bmw" arr[20] = "audi" arr[50] = "toyota" arr["car"] = "ferrari" n = asort(arr) print "Array begins...!" print "=====================" for (v in arr) { print v, arr[v] } print "Array Ends...!" print "=====================" } To summarize, we looked at the basics of network programming and profiling in GAWK. Do check out the book Learning AWK Programming to know more about the intricacies of AWK programming for text processing.
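As a quick recap of the special network filename syntax, the following is a minimal sketch of a two-way TCP exchange between two GAWK processes. The port number 8888 and the file names echoserver.awk and echoclient.awk are illustrative choices, not taken from the book:

$ vi echoserver.awk
#TCP echo server: listen on local port 8888, accept any remote host/port
BEGIN {
    srv = "/inet/tcp/8888/0/0"
    srv |& getline line            # wait for one line from the client
    print "echo: " line |& srv     # send it back over the same connection
    close(srv)
}

$ vi echoclient.awk
#TCP echo client: any free local port, connect to localhost:8888
BEGIN {
    con = "/inet/tcp/0/localhost/8888"
    print "hello gawk" |& con      # send a line to the server
    con |& getline reply           # read the echoed reply
    print reply
    close(con)
}

Start the server first with $ awk -f echoserver.awk, then run $ awk -f echoclient.awk in another Terminal; the client should print echo: hello gawk.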

Scripting with Windows Powershell Desired State Configuration [Video]

Fatema Patrawala
16 Jul 2018
1 min read
https://www.youtube.com/watch?v=H3jqgto5Rk8&list=PLTgRMOcmRb3OpgM9tsUjuI3MgLCHDJ3oM&index=4 What is Desired State Configuration? PowerShell Desired State Configuration (DSC) is a powerful, declarative way of scripting: instead of writing out every single step needed to get from point A to point B, you only describe what point B looks like and PowerShell takes care of getting there. The biggest benefit is that we get to define our configuration, our infrastructure, and our servers as code. Desired State Configuration in PowerShell is achieved through three simple steps: create the configuration, compile the configuration into a MOF file, and deploy the configuration. What will you need to run PowerShell DSC? Thankfully, not a whole lot, because PowerShell comes with it built-in. For managing Windows systems with DSC you need a modern version of PowerShell, that is, Windows PowerShell 4.0, 5.0, or 5.1. PowerShell DSC for Linux is also available, and there is currently limited support for PowerShell Core.
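The following is a minimal sketch of those three steps, assuming a Windows Server machine with the built-in PSDesiredStateConfiguration module; the configuration name, the output path, and the choice of the IIS feature are illustrative and not taken from the video:

# Step 1: create the configuration (a declarative description of the desired state)
Configuration WebServerConfig {
    Node 'localhost' {
        WindowsFeature IIS {
            Name   = 'Web-Server'    # install the IIS role
            Ensure = 'Present'
        }
    }
}

# Step 2: compile the configuration into a MOF file
WebServerConfig -OutputPath 'C:\DSC\WebServerConfig'

# Step 3: deploy the configuration to the target node
Start-DscConfiguration -Path 'C:\DSC\WebServerConfig' -Wait -Verbose

Applying the compiled configuration again is idempotent; DSC only makes changes when the system has drifted from the described state.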

Initial Configuration of SCO 2016

Packt
17 Jul 2017
13 min read
In this article by Michael Seidl, author of the book Microsoft System Center 2016 Orchestrator Cookbook - Second Edition, will show you how to setup Orchestrator Environment and how to deploy and configure Orchestrator Integration Packs. (For more resources related to this topic, see here.) Deploying an additional Runbook designer Runbook designer is the key feature to build your Runbooks. After the initial installation, Runbook designer is installed on the server. For your daily work with orchestrator and Runbooks, you would like to install the Runbook designer on your client or on admin server. We will go through these steps in this recipe. Getting ready You must review the planning the Orchestrator deployment recipe before performing the steps in this recipe. There are a number of dependencies in the planning recipe you must perform in order to successfully complete the tasks in this recipe. You must install a management server before you can install the additional Runbook Designers. The user account performing the installation has administrative privileges on the server nominated for the SCO deployment and must also be a member of OrchestratorUsersGroup or equivalent rights. The example deployment in this recipe is based on the following configuration details: Management server called TLSCO01 with a remote database is already installed System Center 2016 Orchestrator How to do it... The Runbook designer is used to build Runbooks using standard activities and or integration pack activities. The designer can be installed on either a server class operating system or a client class operating system. Follow these steps to deploy an additional Runbook Designer using the deployment manager: Install a supported operating system and join the active directory domain in scope of the SCO deployment. In this recipe the operating system is Windows 10. Ensure you configure the allowed ports and services if the local firewall is enabled for the domain profile. See the following link for details: https://technet.microsoft.com/en-us/library/hh420382(v=sc.12).aspx. Log in to the SCO Management server with a user account with SCO administrative rights. Launch System Center 2016 Orchestrator Deployment Manager: Right-click on Runbook designers, and select Deploy new Runbook Designer: Click on Next on the welcome page. Type the computer name in the Computer field and click on Add. Click on Next. On the Deploy Integration Packs or Hotfixes page check all the integration packs required by the user of the Runbook designer (for this example we will select the AD IP). Click on Next. Click on Finish to begin the installation using the Deployment Manager. How it works... The Deployment Manager is a great option for scaling out your Runbook Servers and also for distributing the Runbook Designer without the need for the installation media. In both cases the Deployment Manager connects to the Management Server and the database server to configure the necessary settings. On the target system the deployment manager installs the required binaries and optionally deploys the integration packs selected. Using the Deployment Manager provides a consistent and coordinated approach to scaling out the components of a SCO deployment. 
See also The following official web link is a great source of the most up to date information on SCO: https://docs.microsoft.com/en-us/system-center/orchestrator/ Registering an SCO Integration Pack Microsoft System Center 2016 Orchestrator (SCO) automation is driven by process automation components. These process automation components are similar in concept to a physical toolbox. In a toolbox you typically have different types of tools which enable you to build what you desire. In the context of SCO these tools are known as Activities. Activities fall into two main categories: Built-in Standard Activities: These are the default activity categories available to you in the Runbook Designer. The standard activities on their own provide you with a set of components to create very powerful Runbooks. Integration Pack Activities: Integration Pack Activities are provided either by Microsoft, the community, solution integration organizations, or are custom created by using the Orchestrator Integration Pack Toolkit. These activities provide you with the Runbook components to interface with the target environment of the IP. For example, the Active Directory IP has the activities you can perform in the target Active Directory environment. This recipe provides the steps to find and register the second type of activities into your default implementation of SCO. Getting ready You must download the Integration Pack(s) you plan to deploy from the provider of the IP. In this example we will be deploying the Active Directory IP, which can be found at the following link: https://www.microsoft.com/en-us/download/details.aspx?id=54098. You must have deployed a System Center 2016 Orchestrator environment and have full administrative rights in the environment. How to do it... The following diagram provides a visual summary and order of the tasks you need to perform to complete this recipe: We will deploy the Microsoft Active Directory (AD) integration pack (IP). Integration pack organization A good practice is to create a folder structure for your integration packs. The folders should reflect versions of the IPs for logical grouping and management. The version of the IP will be visible in the console and as such you must perform this step after you have performed the step to load the IP(s). This approach will aid in change management when updating IPs in multiple environments. Follow these steps to deploy the Active Directory integration pack. Identify the source location for the Integration Pack in scope (for example, the AD IP for SCO2016). Download the IP to a local directory on the Management Server or UNC share. Log in to the SCO Management server. Launch the Deployment Manager: Under Orchestrator Management Server, right-click on Integration Packs. Select Register IP with the Orchestrator Management Server: Click on Next on the welcome page. Click on Add on the Select Integration Packs or Hotfixes page. Navigate to the directory where the target IP is located, click on Open, and then click on Next. Click on Finish . Click on Accept on End-User License Agreement to complete the registration. Click on Refresh to validate if the IP has successfully been registered. How it works... The process of loading an integration pack is simple. The prerequisite for successfully registering the IP (loading) is ensuring you have downloaded a supported IP to a location accessible to the SCO management server. Additionally the person performing the registration must be a SCO administrator. 
At this point we have registered the Integration Pack to our Deployment Wizard, 2 Steps are still necessary before we can use the Integration Pack, see our following Recipe for this. There's more... Registering the IP is the first part of the process of making the IP activities available to Runbook designers and Runbook Servers. The next Step has to be the Deployment of Integration Packs to Runbook Designer. See the next Recipe for that. Orchestrator Integration Packs are provided not only by Microsoft, also third party Companies like Cisco or NetAPP are providing OIP’s for their Products. Additionally there is a huge Community which are providing Orchestrator Integration Packs. There are several Sources of downloading Integration Packs, here are two useful links: http://www.techguy.at/liste-mit-integration-packs-fuer-system-center-orchestrator/ http://scorch.codeplex.com/ https://www.microsoft.com/en-us/download/details.aspx?id=54098 Deploying the IP to designers and Runbook servers Registering the Orchestrator Integration Pack is only the first step, you also need to deploy the OIP to your Designer or Runbook Server. Getting Ready You have to follow the steps described in Recipe Registering an SCO Integration Pack before you can start with the next steps to deploy an OIP. How to do it In our example we will deploy the Active Direcgtory Integration Pack to our Runbooks Desginer. Follow these steps to deploy the Active Directory integration pack. Once the IP in scope (AD IP in our example) has successfully been registered, follow these steps to deploy it to the Runbook Designers and Runbook Servers. Log in to the SCO Management server and launch Deployment Manager: Under Orchestrator Management Server, right-click on the Integration Pack in scope and select Deploy IP to Runbook Server or Runbook Designer: Click on Next on the welcome page, select the IP you would like to deploy (in our example, System Center Integration Pack for Active Directory ,  and then click on Next. On the computer Selection page. Type the name of the Runbook Server or designer  in scope and click on Add (repeat for all servers in the scope).  On the Installation Options page you have the following three options: Schedule the Installation: select this option if you want to schedule the deployment for a specific time. You still have to select one of the next two options. Stop all running Runbooks before installing the Integration Packs  or Hotfixes: This option will as described stop all current Runbooks in the environment. Install the Integration Packs or Hotfixes without stopping the running Runbooks: This is the preferred option if you want to have a controlled deployment without impacting current jobs: Click on Next after making your installation option selection. Click on Finish The integration pack will be deployed to all selected designers and Runbook servers. You must close all Runbook designer consoles and re-launch to see the newly deployed Integration Pack. How it works… The process of deploying an integration pack is simple. The pre-requisite for successfully deploying the IP (loading) is ensuring you have registered a supported IP in the SCO management server. Now we have successfully deployed an Orchestrator Integration Pack. If you have deployed it to a Runbook designer, make sure you close and reopen the designer to be able to use the activities in this Integration Pack. 
Now your are able to use these activities to build your Runbooks, the only thing you have to do, is to follow our next recipe and configure this Integration Pack. This steps can be used for each single Integration Pack, also deploy multiple OIP with one deployment. There’s more… You have to deploy an OIP to every single Designer and Runbook Server, where you want to work with the Activities. Doesn’t matter if you want to edit a Runbook with the Designer or want to run a Runbook on a special Runbook Server, the OIP has to be deployed to both. With Orchestrator Deployment Manager, this is a easy task to do. Initial Integration Pack configuration This recipe provides the steps required to configure an integration pack for use once it has been successfully deployed to a Runbook designer. Getting ready You must deploy an Orchestrator environment and also deploy the IP you plan to configure to a Runbook designer before following the steps in this recipe. The authors assume the user account performing the installation has administrative privileges on the server nominated for the SCO Runbook designer. How to do it... Each integration pack serves as an interface to the actions SCO can perform in the target environment. In our example we will be focusing on the Active Directory connector. We will have two accounts under two categories of AD tasks in our scenario: IP name Category of actions Account name Active Directory Domain Account Management SCOAD_ACCMGT Active Directory Domain Administrator Management SCOAD_DOMADMIN The following diagram provides a visual summary and order of the tasks you need to perform to complete this recipe: Follow these steps to complete the configuration of the Active Directory IP options in the Runbook Designer: Create or identify an existing account for the IP tasks. In our example we are using two accounts to represent two personas of a typical active directory delegation model. SCOAD_ACCMGT is an account with the rights to perform account management tasks only and SCOAD_DOMADMIN is a domain admin account for elevated tasks in Active Directory. Launch the Runbook Designer as a SCO administrator, select Options from the menu bar, and select the IP to configure (in our example, Active Directory). Click on Add, type AD Account Management in the Name: field, select Microsoft Active Directory Domain Configuration in the Type field by clicking on the. In the Properties section type the following: Configuration User Name: SCOAD_ACCMGT Configuration Password: Enter the password for SCOAD_ACCMGT Configuration Domain Controller Name (FQDN): The FQDN of an accessible domain controller in the target AD (In this example, TLDC01.TRUSTLAB.LOCAL). Configuration Default Parent Container: This is an optional field. Leave it blank: Click on OK. Repeat steps 3 and 4 for the Domain Admin account and click on Finish to complete the configuration. How it works... The IP configuration is unique for each system environment SCO interfaces with for the tasks in scope of automation. The active directory IP configuration grants SCO the rights to perform the actions specified in the Runbook using the activities of the IP. Typical Active Directory activities include, but are not limited to creating user and computer accounts, moving user and computer accounts into organizational units, or deleting user and computer accounts. In our example we created two connection account configurations for the following reasons: Follow the guidance of scoping automation to the rights of the manual processes. 
If we use the example of a Runbook for creating user accounts we do not need domain admin access. A service desk user performing the same action manually would typically be granted only account management rights in AD. We have more flexibility with delegating management and access to Runbooks. Runbooks with elevated rights through the connection configuration can be separated from Runbooks with lower rights using folder security. The configuration requires planning and understanding of its implication before implementing. Each IP has its own unique options which you must specify before you create Runbooks using the specified IP. The default IPs that you can download from Microsoft include the documentation on the properties you must set. There’s more… As you have seen in this recipe, we need to configure each additional Integration Pack with a Connections String, User and Password. The built in Activities from SCO, are using the Service Account rights to perform this Actions, or you can configure a different User for most of the built in Activities.  See also The official online documentation for Microsoft Integration Packs is updated regularly and should be a point for reference at https://www.microsoft.com/en-us/download/details.aspx?id=54098 The creating and maintaining a security model for Orchestrator in this article expands further on the delegation model in SCO. Summary In this article, we have covered the following: Deploying an Additional Runbook Designer Registering an SCO Integration Pack Deploying an SCO Integration Pack to Runbook Designer and Server Initial Integration Pack Configuration Resources for Article: Further resources on this subject: Deploying the Orchestrator Appliance [article] Unpacking System Center 2012 Orchestrator [article] Planning a Compliance Program in Microsoft System Center 2012 [article]

Puppet Server and Agents

Packt
16 Aug 2017
18 min read
In this article by Martin Alfke, the author of the book Puppet Essentials - Third Edition, we will cover the following topics: The Puppet server Setting up the Puppet Agent (For more resources related to this topic, see here.) The Puppetserver Many Puppet-based workflows are centered on the server, which is the central source of configuration data and authority. The server hands instructions to all the computer systems in the infrastructure (where agents are installed). It serves multiple purposes in the distributed system of Puppet components. The server will perform the following tasks: Storing manifests and compiling catalogs Serving as the SSL certification authority Processing reports from the agent machines Gathering and storing information about the agents As such, the security of your server machine is paramount. The requirements for hardening are comparable to those of a Kerberos Key Distribution Center. During its first initialization, the Puppet server generates the CA certificate. This self-signed certificate will be distributed among and trusted by all the components of your infrastructure. This is why its private key must be protected very carefully. New agent machines request individual certificates, which are signed with the CA certificate. The terminology around the master software might be a little confusing. That's because both the terms, Puppet Master and Puppet Server, are floating around, and they are closely related too. Let's consider some technological background in order to give you a better understanding of what is what. Puppet's master service mainly comprises a RESTful HTTP API. Agents initiate the HTTPS transactions, with both sides identifying each other using trusted SSL certificates. During the time when Puppet 3 and older versions were current, the HTTPS layer was typically handled by Apache. Puppet's Ruby core was invoked through the Passenger module. This approach offered good stability and scalability. Puppet Inc. has improved upon this standard solution with specialized software called puppetserver. The Ruby-based core of the master remains basically unchanged, although it now runs on JRuby instead of Ruby's own MRI. The HTTPS layer is run by Jetty, sharing the same Java Virtual Machine with the master. By cutting out some middlemen, puppetserver is faster and more scalable than a Passenger solution. It is also significantly easier to set up. Setting up the server machine Getting the puppetserver software onto a Linux machine is just as simple as the agent package. Packages are available for Red Hat Enterprise Linux and its derivatives, Debian and Ubuntu, and any other operating system that is supported to run a Puppet server. At the time of writing, the Puppet server must run on a Linux-based operating system and cannot run on Windows or any other U*ix. A great way to get Puppet Inc. packages on any platform is the Puppet Collection. Shortly after the release of Puppet 4, Puppet Inc. created this new way of supplying software. This can be considered a distribution in its own right. Unlike Linux distributions, it does not contain a kernel, system tools, and libraries. Instead, it comprises various software from the Puppet ecosystem. Software versions that are available from the same Puppet Collection are guaranteed to work well together. Use the following commands to install puppetserver from the first Puppet Collection (PC1) on a Debian 7 machine. (The Collection for Debian 8 has not yet received a puppetserver package at the time of writing this.)
root@puppetmaster# wget http://apt.puppetlabs.com/puppetlabs-release-pc1-jessie.deb root@puppetmaster# dpkg -i puppetlabs-release-pc1-jessie.deb root@puppetmaster# apt-get update root@puppetmaster# apt-get install puppetserver The puppetserver package comprises only the Jetty server and the Clojure API, but the all-in-one puppet-agent package is pulled in as a dependency. The package name, puppet-agent, is misleading. This AIO package contains all the parts of Puppet including the master core, a vendored Ruby build, and several pieces of additional software. Specifically, you can use the puppet command on the master node. You will soon learn how this is useful. However, when using the packages from Puppet Labs, everything gets installed under /opt/puppetlabs. It is advisable to make sure that your PATH variable always includes the /opt/puppetlabs/bin directory so that the puppet command is found here. Regardless of this, once the puppetserver package is installed, you can start the master service: root@puppetmaster# systemctl start puppetserver Depending on the power of your machine, the startup can take a few minutes. Once initialization completes, the server will operate very smoothly though. As soon as the master port 8140 is open, your Puppet master is ready to serve requests. If the service fails to start, there might be an issue with certificate generation. (We observed such issues with some versions of the software.) Check the log file at /var/log/puppetlabs/puppetserver/puppetserver-daemon.log. If it indicates that there are problems while looking up its certificate file, you can work around the problem by temporarily running a standalone master as follows: puppet master --no-daemonize. After initialization, you can stop this process. The certificate is available now, and puppetserver should now be able to start as well. Another reason for start failures is an insufficient amount of memory; the Puppet server process needs 2 GB of memory. Creating the master manifest The master compiles manifests for many machines, but the agent does not get to choose which source file is to be used; this is completely at the master's discretion. The starting point for any compilation by the master is always the site manifest, which can be found in /opt/puppetlabs/code/environments/production/manifests/. Each connecting agent will use all the manifests found here. Of course, you don't want to manage only one identical set of resources on all your machines. To define a piece of manifest exclusively for a specific agent, put it in a node block. This block's contents will only be considered when the calling agent has a matching common name in its SSL certificate. You can dedicate a piece of the manifest to a machine with the name of agent, for example: node 'agent' { $packages = [ 'apache2', 'libapache2-mod-php5', 'libapache2-mod-passenger', ] package { $packages: ensure => 'installed', before => Service['apache2'], } service { 'apache2': ensure => 'running', enable => true, } } Before you set up and connect your first agent to the master, step back and think about how the master should be addressed. By default, agents will try to resolve the unqualified puppet hostname in order to get the master's address. If you have a default domain that is being searched by your machines, you can use this as a default and add a record for puppet as a subdomain (such as puppet.example.net).
Before you set up and connect your first agent to the master, step back and think about how the master should be addressed. By default, agents will try to resolve the unqualified puppet hostname in order to get the master's address. If you have a default domain that is being searched by your machines, you can use this as a default and add a record for puppet as a subdomain (such as puppet.example.net). Otherwise, pick a domain name that seems fitting to you, such as master.example.net or adm01.example.net. What's important is the following:

All your agent machines can resolve the name to an address
The master process is listening for connections on that address
The master uses a certificate with the chosen name as CN or DNS Alt Names

The mode of resolution depends on your circumstances—the hosts file on each machine is one ubiquitous possibility. The Puppet server listens on all the available addresses by default.

This leaves the task of creating a suitable certificate, which is simple. Configure the master to use the appropriate certificate name and restart the service. If the certificate does not exist yet, Puppet will take the necessary steps to create it. Put the following setting into your /etc/puppetlabs/puppet/puppet.conf file on the master machine:

[main]
certname=puppetmaster.example.net

In Puppet versions before 4.0, the default location for the configuration file is /etc/puppet/puppet.conf.

Upon its next start, the master will use the appropriate certificate for all SSL connections. The automatic proliferation of SSL data is not dangerous even in an existing setup, except for the certification authority. If the master were to generate a new CA certificate at any point in time, it would break the trust of all existing agents.

Make sure that the CA data is neither lost nor compromised. All previously signed certificates become obsolete whenever Puppet needs to create a new certification authority. The default storage location is /etc/puppetlabs/puppet/ssl/ca for Puppet 4.0 and higher, and /var/lib/puppet/ssl/ca for older versions.

Inspecting the configuration settings

All the customization of the master's parameters can be made in the puppet.conf file. The operating system packages ship with some settings that are deemed sensible by the respective maintainers. Apart from these explicit settings, Puppet relies on defaults that are either built-in or derived from the environment. Most users will want to rely on these defaults for as many settings as possible. This is possible without any drawbacks, because Puppet makes all settings fully transparent using the --configprint parameter. For example, you can find out where the master manifest files are located:

root@puppetmaster# puppet master --configprint manifest
/etc/puppetlabs/code/environments/production/manifests

To get an overview of all available settings and their values, use the following command:

root@puppetmaster# puppet master --configprint all | less

While this command is especially useful on the master side, the same introspection is available for puppet apply and puppet agent. Setting specific configuration entries is possible with the puppet config command:

root@puppetmaster# puppet config set --section main certname puppetmaster.example.net

Setting up the Puppet agent

As was explained earlier, the master mainly serves instructions to agents in the form of catalogs that are compiled from the manifest. You have also prepared a node block for your first agent in the master manifest. The plain Puppet package that allows you to apply a local manifest contains all the required parts in order to operate a proper agent. If you are using Puppet Labs packages, you need not install the puppetserver package. Just get puppet-agent instead.
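Installing the agent on another Debian machine mirrors the repository setup that was used for the master; only the final package differs. A quick sketch:

# Same PC1 release package as on the master
root@agent# wget http://apt.puppetlabs.com/puppetlabs-release-pc1-jessie.deb
root@agent# dpkg -i puppetlabs-release-pc1-jessie.deb
root@agent# apt-get update
root@agent# apt-get install puppet-agent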
After a successful package installation, you need to tell the Puppet agent where it can find the Puppet server:

root@agent# puppet config set --section agent server puppetmaster.example.net

Afterwards, the following invocation is sufficient for an initial test:

root@agent# puppet agent --test
Info: Creating a new SSL key for agent
Error: Could not request certificate: getaddrinfo: Name or service not known
Exiting; failed to retrieve certificate and waitforcert is disabled

Puppet first created a new SSL certificate key for itself. For its own name, it picked agent, which is the machine's hostname. That's fine for now. An error occurred because the puppet name cannot currently be resolved to anything. Add this to /etc/hosts so that Puppet can contact the master:

root@agent# puppet agent --test
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for agent
Info: Certificate Request fingerprint (SHA256): 52:65:AE:24:5E:2A:C6:17:E2:5D:0A:C9:86:E3:52:44:A2:EC:55:AE:3D:40:A9:F6:E1:28:31:50:FC:8E:80:69
Error: Could not request certificate: Error 500 on SERVER: Internal Server Error: java.io.FileNotFoundException: /etc/puppetlabs/puppet/ssl/ca/requests/agent.pem (Permission denied)
Exiting; failed to retrieve certificate and waitforcert is disabled

Note how Puppet conveniently downloaded and cached the CA certificate. The agent will establish trust based on this certificate from now on.

Puppet created a certificate request and sent it to the master. It then immediately tried to download the signed certificate. This is expected to fail—the master won't just sign a certificate for any request it receives. This behavior is important for proper security.

There is a configuration setting that enables such automatic signing, but users are generally discouraged from using this setting, because it allows the creation of arbitrary numbers of signed (and therefore, trusted) certificates for anyone who has network access to the master.

To authorize the agent, look for the CSR on the master using the puppet cert command:

root@puppetmaster# puppet cert --list
"agent" (SHA256) 52:65:AE:24:5E:2A:C6:17:E2:5D:0A:C9:86:E3:52:44:A2:EC:55:AE:3D:40:A9:F6:E1:28:31:50:FC:8E:80:69

This looks alright, so now you can sign a new certificate for the agent:

root@puppetmaster# puppet cert --sign agent
Notice: Signed certificate request for agent
Notice: Removing file Puppet::SSL::CertificateRequest agent at '/etc/puppetlabs/puppet/ssl/ca/requests/agent.pem'

When choosing the action for puppet cert, the dashes in front of the option name can be omitted—you can just use puppet cert list and puppet cert sign.

Now the agent can receive its certificate and perform its first full catalog run:

root@agent# puppet agent --test
Info: Caching certificate for agent
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for agent
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for agent
Info: Applying configuration version '1437065761'
Notice: Applied catalog in 0.11 seconds

The agent is now fully operational. It received a catalog and applied all resources within.
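A convenient habit from this point on is to preview changes before applying them. The --noop flag makes the agent report what it would change without modifying the system; a minimal sketch of such a dry run:

root@agent# puppet agent --test --noop

This is useful whenever you extend the node block on the master and want to inspect the resulting catalog before it takes effect on the agent.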
Before you read on to learn how the agent usually operates, there is a note that is especially important for users of Puppet 3. The agent contacts a master named puppet by default. Since this name is not the common name in the master's certificate, the preceding command will not even work with a Puppet 3.x master. It works with puppetserver and Puppet 4 because the default puppet name is now included in the certificate's Subject Alternative Names by default.

It is tidier to not rely on this alias name, though. After all, in production, you will probably want to make sure that the master has a fully qualified name that can be resolved, at least inside your network. You should, therefore, add the following to the main section of puppet.conf on each agent machine:

[agent]
server=master.example.net

In the absence of DNS to resolve this name, your agent will need an appropriate entry in its hosts file or a similar alternative way of address resolution.

These steps are necessary in a Puppet 3.x setup. If you have been following along with a Puppet 4 agent, you might notice that after this change, it generates a new Certificate Signing Request:

root@agent# puppet agent --test
Info: Creating a new SSL key for agent.example.net
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for agent.example.net
Info: Certificate Request fingerprint (SHA256): 85:AC:3E:D7:6E:16:62:BD:28:15:B6:18:12:8E:5D:1C:4E:DE:DF:C3:4E:8F:3E:20:78:1B:79:47:AE:36:98:FD
Exiting; no certificate found and waitforcert is disabled

If this happens, you will have to use puppet cert sign on the master again. The agent will then retrieve a new certificate.

The agent's life cycle

In a Puppet-centric workflow, you typically want all changes to the configuration of servers (perhaps even workstations) to originate on the Puppet master and propagate to the agents automatically. Each new machine gets integrated into the Puppet infrastructure with the master at its center and gets removed during the decommissioning, as shown in the following diagram:

(Diagram: the agent's life cycle, from key creation and certificate signing, through periodic catalog runs, to certificate revocation at decommissioning)

The very first step—generating a key and a certificate signing request—is always performed implicitly and automatically at the start of an agent run if no local SSL data exists yet. Puppet creates the required data if no appropriate files are found. There will be a short description of how to trigger this behavior manually later in this section.

The next step is usually the signing of the agent's certificate, which is performed on the master. It is a good practice to monitor the pending requests by listing them on the console:

root@puppetmaster# puppet cert list
root@puppetmaster# puppet cert sign '<agent fqdn>'

From this point on, the agent will periodically check with the master to load updated catalogs. The default interval for this is 30 minutes. The agent will perform a run of a catalog each time and check the sync state of all the contained resources. The run is performed for unchanged catalogs as well, because the sync states can change between runs.

Before you manage to sign the certificate, the agent process will query the master in short intervals for a while. This avoids a delay of up to 30 minutes if the certificate is not ready right when the agent starts up.

Launching this background process can be done manually through a simple command:

root@agent# puppet agent

However, it is preferable to do this through the puppet system service.
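On a systemd-based machine, such as the Debian 8 system used here, this would look roughly like the following sketch; the service name puppet is assumed to be the unit shipped by the puppet-agent package:

root@agent# systemctl enable puppet
root@agent# systemctl start puppet

Once enabled, the agent starts at boot and performs its regular catalog runs every 30 minutes without further interaction.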
When an agent machine is taken out of active service, its certificate should be invalidated. As is customary with SSL, this is done through revocation. The master adds the serial number of the certificate to its certificate revocation list. This list, too, is shared with each agent machine. Revocation is initiated on the master through the puppet cert command:

root@puppetmaster# puppet cert revoke agent

The updated CRL is not honored until the master service is restarted. If security is a concern, this step must not be postponed. The agent can then no longer use its old certificate:

root@agent# puppet agent --test
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: SSL_connect SYSCALL returned=5 errno=0 state=unknown state
[...]
Error: Could not retrieve catalog from remote server: SSL_connect SYSCALL returned=5 errno=0 state=unknown state
[...]

Renewing an agent's certificate

Sometimes, it is necessary during an agent machine's life cycle to regenerate its certificate and related data—the reasons can include data loss, human error, or certificate expiration, among others. Performing the regeneration is quite simple: all relevant files are kept at /etc/puppetlabs/puppet/ssl (for Puppet 3.x, this is /var/lib/puppet/ssl) on the agent machine. Once these files are removed (or rather, the whole ssl/ directory tree), Puppet will renew everything on the next agent run. Of course, a new certificate must be signed.

This requires some preparation—just initiating the request from the agent will fail:

root@agent# puppet agent --test
Info: Creating a new SSL key for agent
Info: Caching certificate for ca
Info: Caching certificate for agent.example.net
Error: Could not request certificate: The certificate retrieved from the master does not match the agent's private key.
Certificate fingerprint: 6A:9F:12:C8:75:C0:B6:10:45:ED:C3:97:24:CC:98:F2:B6:1A:B5:4C:E3:98:96:4F:DA:CD:5B:59:E0:7F:F5:E6
To fix this, remove the certificate from both the master and the agent and then start a Puppet run, which will automatically regenerate a certificate.
On the master:
  puppet cert clean agent.example.net
On the agent:
  On most platforms: find /etc/puppetlabs/puppet/ssl -name agent.example.net.pem -delete
  On Windows: del "/etc/puppetlabs/puppet/ssl/agent.example.net.pem" /f
  puppet agent -t
Exiting; failed to retrieve certificate and waitforcert is disabled

The master still has the old certificate cached. This is a simple protection against the impersonation of your agents by unauthorized entities. Once you perform the cleanup operation on the master, as advised in the preceding output, and remove the indicated file from the agent machine, the agent will be able to successfully place its new CSR:

root@puppetmaster# puppet cert clean agent
Notice: Revoked certificate with serial 18
Notice: Removing file Puppet::SSL::Certificate agent at '/etc/puppetlabs/puppet/ssl/ca/signed/agent.pem'
Notice: Removing file Puppet::SSL::Certificate agent at '/etc/puppetlabs/puppet/ssl/certs/agent.pem'

The rest of the process is identical to the original certificate creation. The agent uploads its CSR to the master, where the certificate is created through the puppet cert sign command.
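To recap, the entire renewal procedure boils down to a handful of commands. The following condensed sketch assumes the agent's certificate name is simply agent, as in the earlier examples:

# On the agent: discard the local SSL data (key, certificate, and cached CA copy)
root@agent# rm -rf /etc/puppetlabs/puppet/ssl
# On the master: remove the cached certificate for this agent
root@puppetmaster# puppet cert clean agent
# On the agent: generate a new key and submit a fresh CSR
root@agent# puppet agent --test
# On the master: sign the new request
root@puppetmaster# puppet cert sign agent
# On the agent: retrieve the signed certificate and run the catalog
root@agent# puppet agent --test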
Running the agent from cron

There is an alternative way to operate the agent. We covered starting one long-running puppet agent process that does its work in set intervals and then goes back to sleep. However, it is also possible to have cron launch a discrete agent process in the same interval. This agent will contact the master once, run the received catalog, and then terminate. This approach has several advantages:

The agent operating system saves some resources
The interval is precise and not subject to skew (when running the background agent, deviations result from the time that elapses during the catalog run), and distributed interval skew can lead to thundering herd effects
Any agent crash or inadvertent termination is not fatal

Setting up Puppet to run the agent from cron is also very easy to do—with Puppet! You can use a manifest such as the following:

service { 'puppet':
  enable => false,
}
cron { 'puppet-agent-run':
  user    => 'root',
  command => 'puppet agent --no-daemonize --onetime --logdest=syslog',
  minute  => fqdn_rand(60),
  hour    => absent,
}

The fqdn_rand function computes a distinct minute for each of your agents. Setting the hour property to absent means that the job should run every hour.

Summary

In this article, we learned about the Puppet server and how to set up the Puppet agent.