How-To Tutorials - Networking

109 Articles

Managing AI Security Risks with Zero Trust: A Strategic Guide

Mark Simos, Nikhil Kumar
29 Nov 2024
15 min read
This article is an excerpt from the book "Zero Trust Overview and Playbook Introduction" by Mark Simos and Nikhil Kumar. Get started on Zero Trust with this step-by-step playbook and learn everything you need to know for a successful Zero Trust journey, with tailored guidance for every role covering strategy, operations, architecture, implementation, and measuring success. This book will become an indispensable reference for everyone in your organization.

Introduction

In today's rapidly evolving technological landscape, artificial intelligence (AI) is both a powerful tool and a significant security risk. Traditional security models focused on static perimeters are no longer sufficient to address AI-driven threats. A Zero Trust approach offers the agility and comprehensive safeguards needed to manage the unique and dynamic security risks associated with AI. This article explores how Zero Trust principles can be applied to mitigate AI risks and outlines the key priorities for effectively integrating AI into organizational security strategies.

How can Zero Trust help manage AI security risk?

A Zero Trust approach is required to effectively manage security risks related to AI. Classic network perimeter-centric approaches are built on assumptions about a static technology environment that are more than 20 years old, and they are not agile enough to keep up with the rapidly evolving security requirements of AI.

The following key elements of Zero Trust security enable you to manage AI risk:

Data centricity: AI has dramatically elevated the importance of data security, and AI requires a data-centric approach that can secure data throughout its life cycle in any location. Zero Trust provides this data-centric approach, and the playbooks in this series guide the roles in your organization through this implementation.

Coordinated management of continuous dynamic risk: Like modern cybersecurity attacks, AI continuously disrupts core assumptions of business, technical, and security processes. This requires coordinated management of a complex and continuously changing security risk. Zero Trust solves this kind of problem using agile security strategies, policies, and architecture to manage the continuous changes to risks, tooling, processes, skills, and more. The playbooks in this series will help you make AI risk mitigation real by providing specific guidance on AI security risks for all impacted roles in the organization.

Let's take a look at which specific elements of Zero Trust are most important to managing AI risk.

Zero Trust – the top four priorities for managing AI risk

Managing AI risk requires prioritizing a few key areas of Zero Trust to address the unique aspects of AI. The role-specific guidance in each playbook provides more detail on how each role will incorporate AI considerations into their daily work.

These priorities follow the simple themes of learn it, use it, protect against it, and work as a team. This is similar to a rational approach for any major disruptive change in any other type of competition or conflict (a military organization learning about a new weapon, professional sports players learning about a new type of equipment or rule change, and so on).

The top four priorities for managing AI risk are as follows:

1. Learn it – educate everyone and set realistic expectations: The AI capabilities available today are very powerful, affect everyone, and are very different from what people expect them to be.
It's critical to educate every role in the organization, from board members and CEOs to individual contributors: they all must understand what AI is, what AI really can and cannot do, and the AI usage policy and guidelines. Without this, people's expectations may be wildly inaccurate and lead to highly impactful mistakes that could easily have been avoided.

Education and expectation management are particularly urgent for AI because of these factors:

Active use in attacks: Attackers are already using AI to impersonate voices, email writing styles, and more.

Active use in business processes: AI is freely available for anyone to use. Job seekers are already submitting AI-generated resumes for your jobs that use your posted job descriptions, people are using public AI services to perform job tasks (and potentially disclosing sensitive information), and much more.

Realism: The results are very realistic and convincing, especially if you don't know how good AI is at creating fake images, videos, and text.

Confusion: Many people don't have a good frame of reference for it because of the way AI has been portrayed in popular culture (which is very different from the current reality of AI).

2. Use it – integrate AI into security: Immediately begin evaluating and integrating AI into your security tooling and processes to take advantage of its increased effectiveness and efficiency. This will allow you to quickly take advantage of this powerful technology to better manage security risk. AI will impact nearly every part of security, including the following:

Security risk discovery, assessment, and management processes
Threat detection and incident response processes
Architecture and engineering security defenses
Integrating security into the design and operation of systems
…and many more

3. Protect against it – update the security strategy, policy, and controls: Organizations must urgently update their strategy, policy, architecture, controls, and processes to account for the use of AI technology (by business units, technology teams, security teams, attackers, and more). This helps enable the organization to take full advantage of AI technology while minimizing security risk.

The key focus areas should include the following:

Plan for attacker use of AI: One of the first impacts most organizations will experience is rapid adoption of AI by attackers to trick your people. Attackers are using AI to get an advantage over target organizations like yours, so you must update your security strategy, threat models, architectures, user education, and more to defend against attackers using AI or targeting you for your data. This should change the organization's expectations and assumptions for the following aspects:

Attacker techniques: Most attackers will experiment with and integrate AI capabilities into their attacks, such as imitating the voices of your colleagues on phone calls, imitating writing styles in phishing emails, creating convincing fake social media pictures and profiles, creating convincing fake company logos and profiles, and more.

Attacker objectives: Attackers will target your data, AI systems, and other related assets because of their high value (directly to the attacker and/or to sell to others).
Your human-generated data is a prized high-value asset for training and grounding AI models, and your innovative use of AI may itself be valuable intellectual property.

Secure the organization's AI usage: The organization must update its security strategy, plans, architecture, processes, and tooling to do the following:

Secure usage of external AI: Establish clear policies and supporting processes and technology for using external AI systems safely.

Secure the organization's AI and related systems: Protect the organization's AI and related systems against attackers. In addition to protecting against traditional security attacks, the organization will also need to defend against AI-specific attack techniques that can extract source data, make the model generate unsafe or unintended results, steal the design of the AI model itself, and more. The playbooks include more details for each role to help them manage their part of this risk.

Take a holistic approach: It's important to secure the full life cycle and dependencies of the AI model, including the model itself, the data sources used by the model, the application that uses the model, the infrastructure it's hosted on, third-party operators such as AI platforms, and other integrated components. This should also take a holistic view of the security life cycle to consider identification, protection, detection, response, recovery, and governance.

Update acquisition and approval processes: This must be done quickly to ensure new AI technology (and other technology) meets the security, privacy, and ethical practices of the organization. This helps avoid extremely damaging, avoidable problems such as transferring ownership of the organization's data to vendors and other parties. You don't want other organizations to grow and capture market share from you by using your data, and you also want to avoid expensive privacy and security incidents from attackers using your data against you. This should include supply chain risk considerations to mitigate both direct supplier risk and Nth-party risk (components of direct suppliers that have been sourced from other organizations). Finding and fixing problems later in the process is much more difficult and expensive than correcting them before or during acquisition, so it is critical to introduce these risk mitigations early.

4. Work as a team – establish a coordinated AI approach: Set up an internal collaboration community or a formal Center of Excellence (CoE) team to ensure insights, learning, and best practices are shared rapidly across teams. AI is a fast-moving space and will drive rapid continuous changes across business, technology, and security teams. You must have mechanisms in place to coordinate and collaborate across these different teams in your organization.

Each playbook describes the specific AI impacts and responsibilities for each affected role.

AI shared responsibility model: Most AI technology will be provided in partnership with AI providers, so managing AI and AI security risk will follow a shared responsibility model between you and your AI providers. Some elements of AI security will be handled by the AI provider and some will be the responsibility of your organization (their customer). This is very similar to how cloud responsibility is managed today (and many AI providers are also cloud providers).
This is also similar to a business that outsources some or all of its manufacturing, logistics, sales (for example, channel sales), or other business functions.

Now, let's take a look at how AI impacts Zero Trust.

How will AI impact Zero Trust?

AI will accelerate many aspects of Zero Trust because it dramatically improves the security tooling and people's ability to use it. AI promises to reduce the burden and effort of important but tedious security tasks such as the following:

Helping security analysts quickly query many data sources (without becoming an expert in query languages or tool interfaces)
Helping write incident response reports
Identifying common follow-up actions to prevent repeat incidents

Simplifying the interface between people and the complex systems they need to use for security will enable people with a broad range of skills to be more productive. Highly skilled people will be able to do more of what they are best at without repetitive and distracting tasks. People earlier in their careers will be able to quickly become more productive in a role and perform tasks at an expert level more quickly, and AI will help them learn by answering questions and providing explanations.

AI will NOT replace the need for security experts, nor the need to modernize security. AI will simplify many security processes and will allow fewer security people to do more, but it won't replace the need for a security mindset or security expertise.

Even with AI technology, people and processes will still be required for the following aspects:

Ask the right security questions of AI systems
Interpret the results and evaluate their accuracy
Take action on the AI results and coordinate across teams
Perform analysis and tasks that AI systems currently can't cover:
  Identify, manage, and measure security risk for the organization
  Build, execute, and monitor a strategy and policy
  Build and monitor relationships and processes between teams
  Integrate business, technical, and security capabilities
  Evaluate compliance requirements and ensure the organization is meeting them in good faith
  Evaluate the security of business and technical processes
  Evaluate the security posture and prioritize mitigation investments
  Evaluate the effectiveness of security processes, tools, and systems
  Plan and implement security for technical systems
  Plan and implement security for applications and products
  Respond to and recover from attacks

In summary, AI will rapidly transform the attacks you face as well as your organization's ability to manage security risk effectively. AI will require a Zero Trust approach, and it will also help your teams do their jobs faster and more efficiently. The guidance in the Zero Trust Playbook Series will accelerate your ability to manage AI risk by guiding everyone through their part. It will help you rapidly align security to business risks and priorities and enable the security agility you need to effectively manage the changes from AI. Some of the questions that naturally come up are where to start and what to do first.

Conclusion

As AI reshapes the cybersecurity landscape, adopting a Zero Trust framework is critical to effectively manage the associated risks. From securing data life cycles to adapting to dynamic attacker strategies, Zero Trust principles provide the foundation for agile and robust AI risk management. By focusing on education, integration, protection, and collaboration, organizations can harness the benefits of AI while mitigating its risks.
The Zero Trust Playbook Series offers practical guidance for all roles, ensuring security remains aligned with business priorities and prepared for the challenges AI introduces. Now is the time to embrace this transformative approach and future-proof your security strategies.

Author Bio

Mark Simos helps individuals and organizations meet cybersecurity, cloud, and digital transformation goals. Mark is the Lead Cybersecurity Architect for Microsoft, where he leads the development of cybersecurity reference architectures, strategies, prescriptive planning roadmaps, best practices, and other security and Zero Trust guidance. Mark also co-chairs the Zero Trust working group at The Open Group and contributes to open standards and other publications like the Zero Trust Commandments. Mark has presented at numerous conferences, including Black Hat, RSA Conference, Gartner Security & Risk Management, Microsoft Ignite and BlueHat, and Financial Executives International.

Nikhil Kumar is Founder at ApTSi, with prior leadership roles at Price Waterhouse and other firms. He has led the setup and implementation of digital transformation and enterprise security initiatives (such as PCI compliance) and built out security architectures. An engineer and computer scientist with a passion for biology, Nikhil is an expert in security, information, and computer architecture. Known for communicating with the board and implementing with engineers and architects, he is an MIT mentor, innovator, and pioneer. Nikhil has authored numerous books, standards, and articles, and presented at conferences globally. He co-chairs the Zero Trust Working Group, a global standards initiative led by The Open Group.


Using IPv6 on Packet Tracer

Packt
13 Jan 2014
6 min read
This article is written by Jesin A, the author of Packet Tracer Network Simulator. Cisco Packet Tracer is a powerful network simulation program that provides simulation, visualization, authoring, assessment, and collaboration capabilities for a network. This article explains the IPv6 addresses used in Packet Tracer.

IPv4 has 4.3 billion addresses, which may seem mind-boggling. However, it took only two decades for the pool to approach depletion. IPv6 has come to the rescue in the form of 128-bit addresses. Packet Tracer supports a wide array of IPv6 features. We'll start by learning how to assign IP addresses to different devices and how to configure routing between them. Finally, we'll create a setup that enables IPv6 communication over IPv4 devices.

Assigning IPv6 addresses

Starting from Packet Tracer version 6, the IP Configuration utility under the Desktop tab of end devices has an option to enter an IPv6 address. Let's begin with a simple topology consisting of two PCs and a router connected to a switch. There are three ways of assigning IPv6 addresses to a device, and we'll see each one of them.

Autoconfiguration

Autoconfiguration requires the least amount of configuration but makes it difficult to remember the IPv6 addresses. This method uses the MAC address of the device to create an IPv6 address with the FE80:: prefix. Carry out the following steps to assign IPv6 addresses using autoconfiguration:

Begin by configuring the router. Enter the interface configuration mode and enable IPv6 on the interface:

R0(config)#ipv6 unicast-routing
R0(config)#interface FastEthernet0/0
R0(config-if)#ipv6 enable

Next, we will configure a link-local address and a global unicast address on this interface. We'll use eui-64 to reduce the configuration:

R0(config-if)#ipv6 address autoconfig
R0(config-if)#ipv6 add 2000::/64 eui-64
R0(config-if)#no shutdown

Verify that the interface is up and has two IPv6 addresses:

R0>sh ipv6 interface brief
FastEthernet0/0 [up/up]
FE80::2D0:58FF:FE65:E701
2000::2D0:58FF:FE65:E701

These IPv6 addresses may vary when you try them out, as they are based on the MAC address. Enable routing so that this router can be identified as a default gateway:

R0(config)#ipv6 unicast-routing

The configuration of the router is now done; let's move on to the PCs. Go to the Desktop tab of each PC, open IP Configuration, and under the IPv6 Configuration section choose Auto Config. The gateway and the PC's IP address will be assigned automatically. Use the simple PDU tool to test the connectivity; you'll see ICMPv6 packets moving between the nodes. To view the IPv6 address from the command line of the PCs, use the ipv6config command.

Static IPv6

IPv6 addresses can also be assigned statically on all devices. We'll use the same topology for this section too. Carry out the following steps to configure IPv6 addresses statically:

Begin by configuring a static IPv6 address on the router:

R0(config)#interface fastethernet0/0
R0(config-if)#ipv6 enable
R0(config-if)#ipv6 address 2000::1/64
R0(config-if)#no shutdown

Go to the Desktop tab of each PC, open the IP Configuration utility, and enter an IPv6 address with the same prefix. Now use the simple PDU tool to test the connectivity.

Once both methods work fine, you can have a look at the IPv6 neighbors table. This is similar to the ARP table of IPv4:
R0#sh ipv6 neighbor
IPv6 Address   Age   Link-layer Addr   State   Interface
2000::2        0     00E0.A39E.05C4    REACH   Fa0/0
2000::3        0     0001.43B9.0268    REACH   Fa0/0

Now that we have configured IPv6 addresses on a single network, let's configure them on more networks and enable routing between them.

IPv6 static and dynamic routing

Similar to IPv4, IPv6 supports both static and dynamic routing, and the configuration commands for its static routing are similar to those of IPv4.

Static routing

Modifying the same topology that we used previously, let's add a router, a switch, and two PCs to create a separate network. The first network will use addresses starting from 2000:1::/64 and the second network will use addresses starting from 2000:2::/64. The link between both routers will have the IP addresses 2001::10/64 and 2001::20/64. Here is a table describing the topology:

Device   Interface         IP address
R1       FastEthernet0/0   2000:1::1/64
R1       FastEthernet0/1   2001::10/64
PC0      FastEthernet      2000:1::2/64
PC1      FastEthernet      2000:1::3/64
R2       FastEthernet0/0   2000:2::1/64
R2       FastEthernet0/1   2001::20/64
PC2      FastEthernet      2000:2::2/64
PC3      FastEthernet      2000:2::3/64

After the necessary IP addresses and gateways have been assigned, open the CLI tab of the R1 router and configure routing with the following commands:

R1(config)#ipv6 unicast-routing
R1(config)#ipv6 route 2000:2::/64 2001::20

Next, open the CLI tab for R2 and configure routing on it:

R2(config)#ipv6 unicast-routing
R2(config)#ipv6 route 2000:1::/64 2001::10

Now use the simple PDU tool to test the connectivity. You may also use the tracert command on a PC to see the path a packet takes:

PC>tracert 2000:2::3
Tracing route to 2000:2::3 over a maximum of 30 hops:
1 63 ms 63 ms 47 ms 2000:1::1
2 94 ms 78 ms 94 ms 2001::20
3 156 ms 109 ms 129 ms 2000:2::3
Trace complete.

Dynamic routing

Packet Tracer offers the same dynamic routing protocols for IPv6: RIPv6, EIGRP, and OSPF. We'll be configuring RIPv6 in this section. Note that RIPv6 does not represent RIP version 6; it is RIP for IPv6 addresses. For this exercise, we'll extend the previous topology with a third network. The additional IP assignment details alone are shown in the following table:

Device   Interface         IPv6 address
R2       FastEthernet1/0   2001:1::10/64
R3       FastEthernet0/0   2000:3::1/64
R3       FastEthernet0/1   2001:1::20/64
PC2      FastEthernet      2000:3::2/64

We'll see how to configure RIP on one router, and you can do the same on the others (a sketch for R2 follows below):

R1(config)#interface FastEthernet0/0
R1(config-if)#ipv6 address 2000:1::1/64
R1(config-if)#ipv6 rip Net1 enable
R1(config-if)#ipv6 enable
R1(config-if)#interface FastEthernet0/1
R1(config-if)#ipv6 address 2001::10/64
R1(config-if)#ipv6 rip Net1 enable
R1(config-if)#ipv6 enable

Note that the ipv6 rip command is used to enable RIP on a particular interface. Entering ipv6 rip Net1 enable on the first interface begins the RIPv6 process. The Net1 string can be any name used to identify the RIP process. Once configured, use the usual diagnostic tools (ping or the simple PDU tool) to check the connectivity.
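Before the connectivity check will succeed, the other routers need the same treatment. As a sketch, here is the matching configuration for R2, with the interface assignments taken from the two tables above (adjust the interface names and process name to your own topology):

R2(config)#ipv6 unicast-routing
R2(config)#interface FastEthernet0/0
R2(config-if)#ipv6 address 2000:2::1/64
R2(config-if)#ipv6 rip Net1 enable
R2(config-if)#ipv6 enable
R2(config-if)#interface FastEthernet0/1
R2(config-if)#ipv6 address 2001::20/64
R2(config-if)#ipv6 rip Net1 enable
R2(config-if)#ipv6 enable
R2(config-if)#interface FastEthernet1/0
R2(config-if)#ipv6 address 2001:1::10/64
R2(config-if)#ipv6 rip Net1 enable
R2(config-if)#ipv6 enable

Repeat the same pattern on R3 for its two interfaces; as long as every RIP-enabled interface uses the same process name, the routers will exchange routes for all connected prefixes.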
To view the RIP database, use the following command:

R1#sh ipv6 rip database
RIP process "Net1" local RIB
2000:2::/64, metric 2, installed
FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec
2000:3::/64, metric 3, installed
FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec
2001::/64, metric 2
FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec
2001:1::/64, metric 2, installed
FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec
RIP process "LINK" local RIB

Trace the route of a packet to see the path it takes:

PC>tracert 2000:3::2
Tracing route to 2000:3::2 over a maximum of 30 hops:
1 31 ms 32 ms 31 ms 2000:1::1
2 50 ms 50 ms 63 ms 2001::20
3 94 ms 94 ms 94 ms 2001:1::20
4 125 ms 109 ms 125 ms 2000:3::2
Trace complete.

Summary

In this article, we learned how to use IPv6 with Packet Tracer. We saw the limitations of IPv4 addressing, and we learned how to assign IPv6 addresses and how to configure IPv6 static and dynamic routing.

Resources for Article:
How to edit the attributes in QGIS
Troubleshooting OpenStack Compute problems
Creating Identity and Resource Pools in Cisco Unified Computing System


Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]

Vijin Boricha
29 Jul 2018
10 min read
One of the contributing factors in the evolution of digital marketing and business is email. Email allows users to exchange real-time messages and other digital information, such as files and images, over the internet in an efficient manner. Each user is required to have a human-readable email address in the form of username@domainname.com. There are various email providers available on the internet, and any user can register to get a free email address. There are different email application-layer protocols available for sending and receiving mails, and the combination of these protocols helps with end-to-end email exchange between users in the same or different mail domains.

In this article, we will look at the normal operation of email protocols and how to use Wireshark for basic analysis and troubleshooting. This article is an excerpt from Network Analysis using Wireshark 2 Cookbook - Second Edition, written by Nagendra Kumar Nainar, Yogesh Ramdoss, and Yoram Orzach.

The three most commonly used application layer protocols are POP3, IMAP, and SMTP:

POP3: Post Office Protocol 3 (POP3) is an application layer protocol used by email systems to retrieve mail from email servers. The email client uses POP3 commands such as LOGIN, LIST, RETR, DELE, and QUIT to access and manipulate (retrieve or delete) the email on the server. POP3 uses TCP port 110 and wipes the mail from the server once it is downloaded to the local client.

IMAP: Internet Mail Access Protocol (IMAP) is another application layer protocol used to retrieve mail from the email server. Unlike POP3, IMAP allows the user to read and access the mail concurrently from more than one client device. With current trends, it is very common to see users with more than one device to access emails (laptop, smartphone, and so on), and the use of IMAP allows the user to access mail at any time, from any device. The current version of IMAP is 4 and it uses TCP port 143.

SMTP: Simple Mail Transfer Protocol (SMTP) is an application layer protocol that is used to send email from the client to the mail server. When the sender and receiver are in different email domains, SMTP helps to exchange the mail between servers in different domains. It uses TCP port 25.

In short, the email client uses SMTP to send the mail to the mail server, and POP3 or IMAP to retrieve the email from the server, while the email server uses SMTP to exchange mail between different domains. In order to maintain the privacy of end users, most email servers use encryption at the transport layer. The transport layer port number will differ from the traditional email protocols if they are used over a secured transport layer (TLS). For example, POP3 over TLS uses TCP port 995, IMAP4 over TLS uses TCP port 993, and SMTP over TLS uses port 465.

Normal operation of mail protocols

As we saw above, the common mail protocols for mail client to server and server to server communication are POP3, SMTP, and IMAP4. Another common method for accessing emails is web access to mail, where you have common mail servers such as Gmail, Yahoo!, and Hotmail. Examples include Outlook Web Access (OWA) and RPC over HTTPS for the Outlook web client from Microsoft. In this recipe, we will talk about the most common client-server and server-server protocols, POP3 and SMTP, and the normal operation of each protocol.

Getting ready

Port mirroring to capture the packets can be done either on the email client side or on the server side.
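If you prefer working from a terminal, the same capture can be taken with tshark, Wireshark's command-line companion. This is a sketch: the interface name and the output filename are placeholders, and the capture filter only covers the default cleartext ports discussed in this recipe (add 995, 993, and 465 for the TLS variants):

tshark -i eth0 -f "tcp port 110 or tcp port 143 or tcp port 25" -w mail-traffic.pcap

The -f option takes a BPF capture filter, so only POP3, IMAP, and SMTP packets are written to mail-traffic.pcap. The display filters used below in the Wireshark GUI also work when reading the file back, for example:

tshark -r mail-traffic.pcap -Y 'pop.request.command == "USER"'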
How to do it...

POP3 is usually used for client to server communications, while SMTP is usually used for server to server communications.

POP3 communications

POP3 is usually used for mail client to mail server communications. The normal operation of POP3 is as follows:

1. Open the email client and enter the username and password for login access.

2. Use pop as a display filter to list all the POP packets. It should be noted that this display filter will only list packets that use TCP port 110. If TLS is used, the filter will not list the POP packets; we may need to use tcp.port == 995 to list the POP3 packets over TLS.

3. Check that the authentication has passed correctly. For example, you might see a session opened with a username that starts with doronn@ (all IDs were deleted) and a password that starts with u6F. To see the TCP stream, right-click on one of the packets in the stream and choose Follow TCP Stream from the drop-down menu.

4. Any error messages in the authentication stage will prevent communications from being established. For example, when user authentication fails and the client gets a Logon failure, it closes the TCP connection.

5. Use relevant display filters to list specific packets. For example, pop.request.command == "USER" will list the POP request packets carrying the username, and pop.request.command == "PASS" will list the POP packets carrying the password.

6. During the mail transfer, be aware that mail clients can easily fill a narrow-band communications line. You can check this by simply configuring the I/O graphs with a filter on POP.

7. Always check for common TCP indications: retransmissions, zero-window, window-full, and others. They can indicate a busy communication line, a slow server, and other problems coming from the communication lines or end nodes and servers. These problems will mostly cause slow connectivity.

When the POP3 protocol uses TLS for encryption, the payload details are not visible. We explain how the SSL captures can be decrypted in the There's more... section.

IMAP communications

IMAP is similar to POP3 in that it is used by the client to retrieve mail from the server. The normal behavior of IMAP communication is as follows:

1. Open the email client and enter the username and password for the relevant account.
2. Compose a new message and send it from any email account.
3. Retrieve the email on the client that is using IMAP. Different clients may have different ways of retrieving the email; use the relevant button to trigger it.
4. Check that you received the email on your local client.

SMTP communications

SMTP is commonly used for the following purposes:

Server to server communications, in which SMTP is the mail protocol that runs between the servers.
In some clients, POP3 or IMAP4 are configured for incoming messages (messages from the server to the client), while SMTP is configured for outgoing messages (messages from the client to the server).

The normal behavior of SMTP communication is as follows:

1. The local email client resolves the IP address of the configured SMTP server address. This triggers a TCP connection to port number 25 if SSL/TLS is not enabled. If SSL/TLS is enabled, a TCP connection is established over port 465.
2. The client exchanges SMTP messages to authenticate with the server. It sends AUTH LOGIN to trigger the login authentication. Upon successful login, the client will be able to send mails.
3. The client sends SMTP messages such as MAIL FROM:<> and RCPT TO:<>, carrying the sender and receiver email addresses. Upon successful queuing, we get an OK response from the SMTP server.

How it works...

In this section, let's look into the normal operation of the different email protocols with the use of Wireshark. Mail clients will mostly use POP3 for communication with the server. In some cases, they will use SMTP as well. IMAP4 is used when server manipulation is required, for example, when you need to see messages that exist on a remote server without downloading them to the client. Server to server communication is usually implemented by SMTP.

The difference between IMAP and POP is that in IMAP, the mail is always stored on the server: if you delete it, it will be unavailable from any other machine. In POP, deleting a downloaded email may or may not delete that email on the server.

In general, SMTP status codes are divided into three categories, which are structured in a way that helps you understand what exactly went wrong. The methods and details of SMTP status codes are discussed in the following section.

POP3

POP3 is an application layer protocol used by mail clients to retrieve email messages from the server. A typical POP3 session has the following steps:

1. The client opens a TCP connection to the server.
2. The server sends an OK message to the client (OK Messaging Multiplexor).
3. The user sends the username and password.
4. The protocol operations begin. NOOP (no operation) is a message sent to keep the connection open, and STAT (status) is sent from the client to the server to query the message status. The server answers with the number of messages and their total size (for example, OK 0 0 means no messages with a total size of zero).
5. When there are no mail messages on the server, the client sends a QUIT message, the server confirms it, and the TCP connection is closed.

In an encrypted connection, the process looks nearly the same: after the establishment of the TCP connection, there are several POP messages, then TLS connection establishment, and then the encrypted application data.

IMAP

The normal operation of IMAP is as follows:

1. The email client resolves the IP address of the IMAP server and establishes a TCP connection to port 143 when SSL/TLS is disabled. When SSL is enabled, the TCP session will be established over port 993.
2. Once the session is established, the client sends an IMAP capability message, requesting that the server list the capabilities it supports.
3. This is followed by authentication for access to the server. When the authentication is successful, the server replies with a response stating the login was a success.
4. The client now sends the IMAP FETCH command to fetch any mails from the server.
5. When the client is closed, it sends a logout message and clears the TCP session.

SMTP

The normal operation of SMTP is as follows:

1. The email client resolves the IP address of the SMTP server and opens a TCP connection to the server on port 25 when SSL/TLS is not enabled. If SSL is enabled, the client will open the session on port 465.
2. Upon successful TCP session establishment, the client will send an AUTH LOGIN message to prompt for the account username/password.
3. The username and password will be sent to the SMTP server for account verification. SMTP will send a response code of 235 if authentication is successful.
4. The client now sends the sender's email address to the SMTP server. The SMTP server responds with a response code of 250 if the sender's address is valid.
5. Upon receiving an OK response from the server, the client will send the receiver's address. The SMTP server will respond with a response code of 250 if the receiver's address is valid.
6. The client will now push the actual email message. SMTP will respond with a response code of 250 and the response parameter OK: queued. The successfully queued message ensures that the mail is successfully sent and queued for delivery to the receiver's address.

We have learned how to analyze issues and malicious emails in POP, IMAP, and SMTP. To find out more about DNS protocol analysis and FTP, HTTP/1, and HTTP/2, see our book Network Analysis using Wireshark 2 Cookbook - Second Edition.

Also read:
What's new in Wireshark 2.6?
Analyzing enterprise application behavior with Wireshark 2
Capturing Wireshark Packets


Scripting with Windows Powershell Desired State Configuration [Video]

Fatema Patrawala
16 Jul 2018
1 min read
https://www.youtube.com/watch?v=H3jqgto5Rk8&list=PLTgRMOcmRb3OpgM9tsUjuI3MgLCHDJ3oM&index=4

What is Desired State Configuration?

PowerShell Desired State Configuration (DSC) is a really powerful way of scripting. It is a declarative model of scripting: instead of telling PowerShell exactly each and every step to get from point A to point B, you only need to describe what point B is, and PowerShell takes care of everything else. The biggest benefit is that we get to define our configuration, our infrastructure, and our servers as code.

Desired State Configuration in PowerShell can be achieved through three simple steps (a minimal sketch appears at the end of this article):

1. Create the configuration
2. Compile the configuration into a MOF file
3. Deploy the configuration

What will you need to run PowerShell DSC?

Thankfully, we do not need a whole lot; PowerShell comes with it built in. For managing Windows systems with DSC you are going to need a modern version of PowerShell, that is:

Windows PowerShell 4.0, 5.0, or 5.1
PowerShell DSC for Linux is available
There is currently limited support for PowerShell Core

Also read:
Exploring Windows PowerShell 5.0
Introducing PowerShell Remoting
Managing Nano Server with Windows PowerShell and Windows PowerShell DSC
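To make the three steps concrete, here is a minimal sketch of a DSC configuration that ensures a folder exists. The configuration name, target path, and output folder are illustrative examples, not taken from the video:

# Step 1: create the configuration using the built-in File resource
Configuration DemoFolder {
    Node 'localhost' {
        File ExampleDir {
            DestinationPath = 'C:\DemoFolder'   # hypothetical target path
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Step 2: invoking the configuration compiles it into a MOF file
DemoFolder -OutputPath 'C:\DSC\DemoFolder'

# Step 3: deploy the configuration to the local machine
Start-DscConfiguration -Path 'C:\DSC\DemoFolder' -Wait -Verbose

Notice that the configuration only declares the desired end state; the Local Configuration Manager works out how to get there.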


OpenDaylight Fundamentals

Packt
05 Jul 2017
14 min read
In this article by Jamie Goodyear, Mathieu Lemay, Rashmi Pujar, Yrineu Rodrigues, Mohamed El-Serngawy, and Alexis de Talhouët, the authors of the book OpenDaylight Cookbook, we will be covering the following recipes:

Connecting OpenFlow switches
Mounting a NETCONF device
Browsing data models with Yang UI

(For more resources related to this topic, see here.)

OpenDaylight is a collaborative platform supported by leaders in the networking industry and hosted by the Linux Foundation. The goal of the platform is to enable the adoption of software-defined networking (SDN) and create a solid base for network functions virtualization (NFV).

Connecting OpenFlow switches

OpenFlow is a vendor-neutral standard communications interface defined to enable the interaction between the control and forwarding channels of an SDN architecture. The OpenFlow plugin project intends to support implementations of the OpenFlow specification as it evolves. It currently supports OpenFlow versions 1.0 and 1.3.2. In addition to the core OpenFlow specification, OpenDaylight Beryllium also includes preliminary support for the Table Type Patterns and OF-CONFIG specifications. The OpenFlow southbound plugin currently provides the following components:

Flow management
Group management
Meter management
Statistics polling

Let's connect an OpenFlow switch to OpenDaylight.

Getting ready

This recipe requires an OpenFlow switch. If you don't have one, you can use a mininet-vm with OvS installed. You can download mininet-vm from https://github.com/mininet/mininet/wiki/Mininet-VM-Images; any version should work. The following recipe will be presented using a mininet-vm with OvS 2.0.2.

How to do it...

1. Start the OpenDaylight distribution using the karaf script. Using this script will give you access to the karaf CLI:

$ ./bin/karaf

2. Install the user-facing feature responsible for pulling in all dependencies needed to connect an OpenFlow switch:

opendaylight-user@root>feature:install odl-openflowplugin-all

It might take a minute or so to complete the installation.

3. Connect an OpenFlow switch to OpenDaylight. We will use mininet-vm as our OpenFlow switch, as this VM runs an instance of OpenVSwitch. Log in to mininet-vm using the username mininet and the password mininet, then create a bridge:

mininet@mininet-vm:~$ sudo ovs-vsctl add-br br0

Now let's connect OpenDaylight as the controller of br0:

mininet@mininet-vm:~$ sudo ovs-vsctl set-controller br0 tcp:${CONTROLLER_IP}:6633

Let's look at our topology:

mininet@mininet-vm:~$ sudo ovs-vsctl show
0b8ed0aa-67ac-4405-af13-70249a7e8a96
Bridge "br0"
Controller "tcp:${CONTROLLER_IP}:6633"
is_connected: true
Port "br0"
Interface "br0"
type: internal
ovs_version: "2.0.2"

${CONTROLLER_IP} is the IP address of the host running OpenDaylight. We're establishing a TCP connection.

4. Have a look at the created OpenFlow node. Once the OpenFlow switch is connected, send the following request to get information regarding the switch:

Type: GET
Headers: Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/

This will list all the nodes under the opendaylight-inventory subtree of the MD-SAL that stores OpenFlow switch information. As we connected our first switch, we should have only one node there. It will contain all the information the OpenFlow switch has, including its tables, its ports, flow statistics, and so on.
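If you prefer the command line over a REST client, the same request can be issued with curl; this is a sketch assuming the default admin/admin credentials (the -u option produces the Basic authorization header shown above):

curl -u admin:admin http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/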
How it works...

Once the feature is installed, OpenDaylight listens for connections on ports 6633 and 6640. Setting up the controller on the OpenFlow-capable switch will immediately trigger a callback on OpenDaylight. It will create the communication pipeline between the switch and OpenDaylight so they can communicate in a scalable and non-blocking way.

Mounting a NETCONF device

The OpenDaylight component responsible for connecting remote NETCONF devices is called the NETCONF southbound plugin, aka the netconf-connector. Creating an instance of the netconf-connector will connect a NETCONF device. The NETCONF device will be seen as a mount point in the MD-SAL, exposing the device configuration and operational datastores and its capabilities. These mount points allow applications and remote users (over RESTCONF) to interact with the mounted devices. The netconf-connector currently supports RFC 6241, RFC 5277, and RFC 6022. The following recipe will explain how to connect a NETCONF device to OpenDaylight.

Getting ready

This recipe requires a NETCONF device. If you don't have one, you can use the NETCONF test tool provided by OpenDaylight. It can be downloaded from the OpenDaylight Nexus repository: https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/netconf/netconf-testtool/1.0.4-Beryllium-SR4/netconf-testtool-1.0.4-Beryllium-SR4-executable.jar

How to do it...

1. Start the OpenDaylight karaf distribution using the karaf script. Using this script will give you access to the karaf CLI:

$ ./bin/karaf

2. Install the user-facing feature responsible for pulling in all dependencies needed to connect a NETCONF device:

opendaylight-user@root>feature:install odl-netconf-topology odl-restconf

It might take a minute or so to complete the installation.

3. Start your NETCONF device. If you want to use the NETCONF test tool, it is time to simulate a NETCONF device using the following command:

$ java -jar netconf-testtool-1.0.4-Beryllium-SR4-executable.jar --device-count 1

This will simulate one device bound to port 17830.

4. Configure a new netconf-connector. Send the following request using RESTCONF:

Type: PUT
URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

By looking closer at the URL, you will notice that the last part is new-netconf-device. This must match the node-id that we will define in the payload.

Headers:
Accept: application/xml
Content-Type: application/xml
Authorization: Basic YWRtaW46YWRtaW4=

Payload:

<node>
  <node-id>new-netconf-device</node-id>
  <host>127.0.0.1</host>
  <port>17830</port>
  <username>admin</username>
  <password>admin</password>
  <tcp-only>false</tcp-only>
</node>

Let's have a closer look at this payload:

node-id: Defines the name of the netconf-connector.
host: Defines the IP address of the NETCONF device.
port: Defines the port for the NETCONF session.
username: Defines the username of the NETCONF session. This should be provided by the NETCONF device configuration.
password: Defines the password of the NETCONF session. As with the username, this should be provided by the NETCONF device configuration.
tcp-only: Defines whether the NETCONF session should use plain TCP or SSH. If set to true, it will use TCP.

This is the default configuration of the netconf-connector; it actually has more configurable elements, which are presented in a second part below. Once you have completed the request, send it. This will spawn a new netconf-connector that connects to the NETCONF device at the provided IP address and port using the provided credentials.
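The same request can be sent with curl by saving the XML payload to a local file; this is a sketch, and new-netconf-device.xml is a placeholder name for a file containing the payload above:

curl -u admin:admin -X PUT -H "Content-Type: application/xml" -H "Accept: application/xml" --data-binary @new-netconf-device.xml http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device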
5. Verify that the netconf-connector has correctly been pushed and get information about the connected NETCONF device. First, you can look at the log to see whether any error occurred. If no error occurred, you will see something like:

2016-05-07 11:37:42,470 | INFO | sing-executor-11 | NetconfDevice | 253 - org.opendaylight.netconf.sal-netconf-connector - 1.3.0.Beryllium | RemoteDevice{new-netconf-device}: Netconf connector initialized successfully

Once the new netconf-connector is created, some useful metadata is written into the MD-SAL's operational datastore under the network-topology subtree. To retrieve this information, send the following request:

Type: GET
Headers: Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

We're using new-netconf-device as the node-id because this is the name we assigned to the netconf-connector in a previous step. This request will provide information about the connection status and device capabilities. The device capabilities are all the YANG models the NETCONF device provided in its hello-message, which were used to create the schema context.

More configuration for the netconf-connector

As mentioned previously, the netconf-connector contains various configuration elements. These fields are non-mandatory and have default values. If you do not wish to override any of these values, you shouldn't provide them.

schema-cache-directory: This corresponds to the destination schema repository for YANG files downloaded from the NETCONF device. By default, those schemas are saved in the cache directory ($ODL_ROOT/cache/schema). Using this configuration will define where to save the downloaded schemas relative to the cache directory. For instance, if you assigned new-schema-cache, schemas related to this device would be located under $ODL_ROOT/cache/new-schema-cache/.

reconnect-on-changed-schema: If set to true, the connector will auto disconnect/reconnect when schemas are changed in the remote device. The netconf-connector will subscribe to base NETCONF notifications and listen for netconf-capability-change notifications. The default value is false.

connection-timeout-millis: Timeout in milliseconds after which the connection must be established. The default value is 20000 milliseconds.

default-request-timeout-millis: Timeout for blocking operations within transactions. Once this timer is reached, if the request is not yet finished, it will be canceled. The default value is 60000 milliseconds.

max-connection-attempts: Maximum number of connection attempts. A non-positive or null value is interpreted as infinity. The default value is 0, which means it will retry forever.

between-attempts-timeout-millis: Initial timeout in milliseconds between connection attempts. This will be multiplied by the sleep-factor for every new attempt. The default value is 2000 milliseconds.

sleep-factor: Back-off factor used to increase the delay between connection attempts. The default value is 1.5.

keepalive-delay: The netconf-connector sends keepalive RPCs while the session is idle to ensure session connectivity. This delay specifies the timeout between keepalive RPCs in seconds. Providing a value of 0 will disable this mechanism. The default value is 120 seconds.
Using this configuration, your payload would look like this:

<node>
  <node-id>new-netconf-device</node-id>
  <host>127.0.0.1</host>
  <port>17830</port>
  <username>admin</username>
  <password>admin</password>
  <tcp-only>false</tcp-only>
  <schema-cache-directory>new_netconf_device_cache</schema-cache-directory>
  <reconnect-on-changed-schema>false</reconnect-on-changed-schema>
  <connection-timeout-millis>20000</connection-timeout-millis>
  <default-request-timeout-millis>60000</default-request-timeout-millis>
  <max-connection-attempts>0</max-connection-attempts>
  <between-attempts-timeout-millis>2000</between-attempts-timeout-millis>
  <sleep-factor>1.5</sleep-factor>
  <keepalive-delay>120</keepalive-delay>
</node>

How it works...

Once the request to connect a new NETCONF device is sent, OpenDaylight will set up the communication channel used for managing and interacting with the device. At first, the remote NETCONF device will send its hello-message defining all of the capabilities it has. Based on this, the netconf-connector will download all the YANG files provided by the device. All those YANG files will define the schema context of the device.

At the end of the process, some exposed capabilities might end up as unavailable, for two possible reasons:

The NETCONF device provided a capability in its hello-message but hasn't provided the schema.
ODL failed to mount a given schema due to YANG violation(s). OpenDaylight parses YANG models as per RFC 6020; if a schema does not respect the RFC, it could end up as an unavailable capability.

If you encounter one of these situations, looking at the logs will pinpoint the reason for the failure.

There's more...

Once the NETCONF device is connected, all its capabilities are available through the mount point. View it as a pass-through directly to the NETCONF device.

Get datastore

To see the data contained in the device datastore, use the following request:

Type: GET
Headers: Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/

Adding yang-ext:mount/ to the URL will access the mount point created for new-netconf-device. This will show the configuration datastore. If you want to see the operational one, replace config with operational in the URL.

If your device defines a YANG model, you can access its data using the following request:

Type: GET
Headers: Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<container>

The <module> represents a schema defining the <container>. The <container> can either be a list or a container. It is not possible to access a single leaf. You can access containers/lists within containers/lists. The last part of the URL would look like this:

…/yang-ext:mount/<module>:<container>/<sub-container>

Invoke RPC

In order to invoke an RPC on the remote device, use the following request:

Type: POST
Headers:
Accept: application/xml
Content-Type: application/xml
Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<operation>

This URL accesses the mount point of new-netconf-device, and through this mount point we're accessing the <module> to call its <operation>.
The <module> represents a schema defining the RPC, and <operation> represents the RPC to call.

Delete a netconf-connector

Removing a netconf-connector will drop the NETCONF session, and all resources will be cleaned up. To perform such an operation, use the following request:

Type: DELETE
Headers: Authorization: Basic YWRtaW46YWRtaW4=
URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

By looking closer at the URL, you can see that we are removing the NETCONF node-id new-netconf-device.

Browsing data models with Yang UI

Yang UI is a user interface application through which one can navigate among all the YANG models available in the OpenDaylight controller. Not only does it aggregate all data models, it also enables their usage. Using this interface, you can create, remove, update, and delete any part of the model-driven datastore. It provides a nice, smooth user interface, making it easier to browse through the model(s). This recipe will guide you through those functionalities.

Getting ready

This recipe only requires the OpenDaylight controller and a web browser.

How to do it...

1. Start your OpenDaylight distribution using the karaf script. Using this client will give you access to the karaf CLI:

$ ./bin/karaf

2. Install the user-facing feature responsible for pulling in all dependencies needed to use Yang UI:

opendaylight-user@root>feature:install odl-dlux-yangui

It might take a minute or so to complete the installation.

3. Navigate to http://localhost:8181/index.html#/yangui/index and log in with the username admin and the password admin. Once logged in, all modules will load until you see this message at the bottom of the screen: Loading completed successfully. You should see the API tab listing all YANG models in the following format:

<module-name> rev.<revision-date>

For instance:

cluster-admin rev.2015-10-13
config rev.2013-04-05
credential-store rev.2015-02-26

By default, there isn't much you can do with the provided YANG models. So, let's connect an OpenFlow switch to better understand how to use this Yang UI. Once done, refresh your web page to load the newly added modules.

4. Look for opendaylight-inventory rev.2013-08-19 and select the operational tab, as nothing will yet be in the config datastore. Then click on nodes and you'll see a request bar at the bottom of the page with multiple options. You can either copy the request to the clipboard to use it in your browser, send it, show a preview of it, or define a custom API request. For now, we will only send the request. You should see Request sent successfully, and under this message should be the retrieved data. As we only have one switch connected, there is only one node, and all the switch's operational information is now printed on your screen. You could do the same request by specifying the node-id in the request. To do that, expand nodes and click on node {id}, which enables a more fine-grained search.

How it works...

OpenDaylight has a model-driven architecture, which means that all of its components are modeled using YANG. While installing features, OpenDaylight loads YANG models, making them available within the MD-SAL datastore. Yang UI is a representation of this datastore. Each schema represents a subtree based on the name of the module and its revision date. Yang UI aggregates and parses all those models. It also acts as a REST client; through its web interface we can execute functions such as GET, POST, PUT, and DELETE.
There's more… The example shown previously can be improved on, as there was no user yang model loaded. For instance, if you mount a NETCONF device containing its own yang model, you could interact with it through YangUI. You would use the config datastore to push/update some data, and you would see the operational datastore updated accordingly. In addition, accessing your data would be much easier than having to define the exact URL. See also Using API doc as a REST API client. Summary Throughout this article, we learned recipes such as connecting OpenFlow switches, mounting a NETCONF device, browsing data models with Yang UI. Resources for Article: Further resources on this subject: Introduction to SDN - Transformation from legacy to SDN [article] The OpenFlow Controllers [article] Introduction to SDN - Transformation from legacy to SDN [article]


Installing Arch Linux using the official ISO

Packt
19 Feb 2013
7 min read
(For more resources related to this topic, see here.)

Getting ready

You can get the official ISO image file from https://www.archlinux.org/download/. On this page you will find a download link to the latest release. Depending on your preference, download the torrent file or the ISO image file directly. At the time of writing this article, there is a dual ISO image file that contains both i686 and x86-64 architectures on one disk.

The following list describes the main tasks that we will perform in this recipe:

Preparing, booting, and setting the keyboard layout: We are going to get the ISO file from the download page of the Arch Linux website and store it on the preferred media of our choice. Start your PC with your preferred installation media (CD or USB stick). On most PC systems, you can access the boot menu by pressing one of the function keys, usually between F8 and F12 depending on the motherboard manufacturer. On older machines that don't have a boot menu, you might need to change the boot order in the BIOS, where the CD-ROM (or DVD/Blu-ray) has to be chosen as the first device to try booting from. We'll also explain how to use a keyboard layout different from the default one in this recipe.

Creating, formatting, and mounting partitions: You can partition the disks the way you want using cfdisk (for MBR disk partitioning) or cgdisk (for GUID disk partitioning). After creating the partitions, we can choose to format them with specific filesystems. When all partitions are formatted, we need to mount them. First we will mount the root partition to /mnt. The other partitions will be mounted later on, after you have created the specific folders. We'll designate our device with /dev/sdX; in your case this can be /dev/sda, and so on.

Connecting to the Internet: To be able to continue installing, you need to connect to the Internet, because there are no packages available for installation on the ISO. For a wireless network you will need to use netcfg. When connected to a wired network, just use dhcpcd or dhclient.

Installing the base system and boot loader: These days the base system gets installed by running a simple script, pacstrap. Pacstrap takes multiple parameters: the target location and the packages or groups you want to install. For people who want to develop on their machines, the best base install is adding base-devel to the default installation. For normal end users, just base will be sufficient to start.

Configuring the system: In this recipe, we'll describe the flow of what to do during the configuration.

How to do it...

The following steps will guide you in preparing, booting, and setting the keyboard layout:

1. Once you have downloaded the ISO image file, you should also verify its integrity by downloading the sha1sums.txt file from the download page. These days you can also check whether the ISO is completely valid by verifying its signature. Verify the integrity by issuing the sha1sum -c sha1sums.txt command and you'll see whether your download was successful or not. Also check that the signature of the ISO is correct by running gpg -v on the .sig file:

sha1sum -c sha1sums.txt
gpg -v archlinux-2012-08-04-dual.iso.sig

If the checksum reports OK and the signature is valid, the download is intact.

2. Now that we are sure our ISO is OK, we can burn it to a CD with our favorite burning program.
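If you prefer a USB stick to a CD, a common approach is to write the image with dd; this is a sketch, and you must replace /dev/sdX with your actual USB device, as the command overwrites everything on it:

dd bs=4M if=archlinux-2012-08-04-dual.iso of=/dev/sdX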
2. Insert the CD into the drive, or insert the USB stick into a USB port of your PC. Enter the boot menu, or let your computer boot automatically from the inserted installation media. If the previous steps were performed correctly, you will see the boot menu of the installation media.
3. Select the architecture you want and press Enter, and we'll be on our way.
4. Search for the keyboard layout desired for your region. The available keyboard layouts can be found at /usr/share/kbd/keymaps/.
5. Set the desired keyboard layout with loadkeys keyboardlayout.

Now let's perform the following steps to create, format, and mount partitions:

1. Start cfdisk or cgdisk, with the device you want to partition as the first parameter:

cfdisk /dev/sdX
cgdisk /dev/sdX

2. Create your partition scheme.
3. Store the partition scheme.
4. Use the mkfs command to create a filesystem on a specific partition:

mkfs -t vfat /dev/sdX
mkfs.ext4 -L root /dev/sdX

5. Mount your root partition to /mnt:

mount /dev/sdX3 /mnt

6. Make directories under the mount point for your other partitions:

mkdir -p /mnt/boot

7. Mount the other partitions:

mount /dev/sdX1 /mnt/boot

The following steps are needed to connect to the Internet:

1. When you need a wireless network, create a netcfg profile and run netcfg mywireless.
2. Use dhclient or dhcpcd to get an IP address.

The following steps should be performed for installing the base system and boot loader:

1. Run pacstrap with the desired parameters:

pacstrap /mnt base base-devel

2. Install the desired boot loader: the best choice at this moment is Syslinux. The final installation of the boot loader will be done in a chroot during the initial configuration.

We'll now list the steps to do during the configuration:

1. Generate fstab with genfstab:

genfstab -p /mnt >> /mnt/etc/fstab

2. Change root into the new system location:

arch-chroot /mnt

3. Set your hostname in /etc/hostname.
4. Create the /etc/localtime symlink.
5. Set your locale in /etc/locale.conf.
6. Uncomment the configured locale in /etc/locale.gen.
7. Run locale-gen.
8. Configure /etc/mkinitcpio.conf.
9. Generate your initial ramdisk:

mkinitcpio -p linux

10. Finish the installation of your boot loader.
11. Set the root password with passwd.
12. Leave the chroot environment (exit).

How it works...

We downloaded the ISO image file via torrent, or via HTTP from the mirror sites listed on the download page. The sha1sum command lets us verify the integrity of the downloaded ISO. On top of the checksum, we can also check the integrity by verifying the signature available for the ISO. So now, we can rest assured that the downloaded file is the real one. The ISO contains a fully working operating system. It also contains all the necessary tools to perform system recovery and installation.

The keyboard configuration set with loadkeys will make sure that the key you press on your keyboard will be translated to the correct letter on your screen. Using a different keyboard layout from the one on your physical keyboard might be confusing.

We then created a partition scheme on the selected disk with the appropriate tool (cfdisk or cgdisk). Make Filesystem (mkfs) is a unified frontend to create a filesystem. Using it we created our filesystem layout manually under /mnt, creating our default partition layout in our root and mounting the specific partitions accordingly. You can make a connection with your wireless network (if needed), and then use dhcpcd or dhclient to obtain an IP address that enables you to access the Internet.

Pacstrap will run pacman with a modified root location to install the desired packages into the newly created system. For example, installing Syslinux:

pacstrap /mnt syslinux
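As a quick recap, here is the whole sequence in one place. This is a sketch assuming a disk /dev/sda with /dev/sda1 as the boot partition and /dev/sda3 as the root partition; adjust the device names to your own layout:

cfdisk /dev/sda                    # create the partition scheme
mkfs -t vfat /dev/sda1             # format the boot partition
mkfs.ext4 -L root /dev/sda3        # format the root partition
mount /dev/sda3 /mnt               # mount root first
mkdir -p /mnt/boot
mount /dev/sda1 /mnt/boot          # then mount boot below it
pacstrap /mnt base syslinux        # install the base system and boot loader
genfstab -p /mnt >> /mnt/etc/fstab # record the mounts
arch-chroot /mnt                   # continue with the configuration steps above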
The specific configuration files will ensure we don't have to do all those steps over and over again on every boot.

Summary

This article explained the procedure to get Arch Linux installed on your system using the official installation media.

Resources for Article:

Further resources on this subject:
- Compression Formats in Linux Shell Script [Article]
- Making a Complete yet Small Linux Distribution [Article]
- Linux Shell Script: Tips and Tricks [Article]

Network programming 101 with GAWK (GNU AWK)

Pavan Ramchandani
31 May 2018
12 min read
In today's tutorial, we will learn about the networking aspects of GAWK, for example working with TCP/IP on both the client side and the server side. We will also explore HTTP services to help you get going with networking in AWK. This tutorial is an excerpt from a book written by Shiwang Kalkhanda, titled Learning AWK Programming.

The AWK programming language was developed as a pattern-matching language for text manipulation; however, GAWK has advanced features, such as file-like handling of network connections. We can perform simple TCP/IP connection handling in GAWK with the help of special filenames. GAWK extends the two-way I/O mechanism used with the |& operator to simple networking using these special filenames, which hide the complex details of socket programming from the programmer.

The special filename for network communication is made up of multiple fields, all of which are mandatory. The following is the syntax of creating a filename for network communication:

/net-type/protocol/local-port/remote-host/remote-port

Each field is separated from the next with a forward slash. Specifying all of the fields is mandatory. If a field is not valid for a protocol, or you want the system to pick a default value for that field, it is set to 0. The following list illustrates the meaning of the different fields used in creating the file for network communication:

- net-type: Its value is inet4 for IPv4, inet6 for IPv6, or inet to use the system default (which is generally IPv4).
- protocol: It is either tcp or udp for a TCP or UDP IP connection. It is advised you use the TCP protocol for networking. UDP is used when low overhead is a priority.
- local-port: Its value decides which port on the local machine is used for communication with the remote system. On the client side, its value is generally set to 0 to indicate that any free port may be picked by the system itself. On the server side, its value is other than 0, because the service is provided on a specific, publicly known port number or service name, such as http, smtp, and so on.
- remote-host: It is the remote host at the other end of the connection. For the server side, its value is set to 0 to indicate that the server is open to all other hosts for connection. For the client side, its value is fixed to one remote host and hence is always different from 0. This name can either be symbolic, such as www.google.com, or numeric, such as 123.45.67.89.
- remote-port: It is the port on which the remote machine will communicate across the network. For clients, its value is other than 0, indicating the port to which they are connecting on the remote machine. For servers, its value is the port on which they expect the connection from the client to be established. We can use a service name here, such as ftp or http, or a port number, such as 80 or 21.
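To make the filename syntax concrete, here is a minimal client sketch (not from the book). It connects to a hypothetical daytime service on TCP port 13 of the local machine, reads one line, and closes the connection; the field values follow the list above, with the default net-type, tcp as the protocol, any free local port, localhost as the remote host, and 13 as the remote port:

$ gawk 'BEGIN {
  svc = "/inet/tcp/0/localhost/13"   # assumes a daytime service is listening
  svc |& getline line                # read one line from the service
  print line
  close(svc)
}'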
TCP client and server (/inet/tcp)

TCP guarantees that data is received at the other end in the same order as it was transmitted, so always use TCP. In the following example, we will create a TCP server (sender) to send the current date and time of the server to the client. The server uses the strftime() function with the coprocess operator, listening on port 8080. The remote host and remote port could be any client, so their values are kept as 0. The server connection is closed by passing the special filename to the close() function, as follows:

$ vi tcpserver.awk
#TCP-Server
BEGIN {
  print strftime() |& "/inet/tcp/8080/0/0"
  close("/inet/tcp/8080/0/0")
}

Now, open one Terminal and run this program before running the client program, as follows:

$ awk -f tcpserver.awk

Next, we create the TCP client (receiver) to receive the data sent by the server. Here, we first create the client connection and pass the received data to getline using the coprocess operator. The local-port value is set to 0 to be chosen automatically by the system, the remote-host is set to localhost, and the remote-port is set to the TCP server's port, 8080. After that, the received message is printed using the print $0 command, and finally the client connection is closed using close, as follows:

$ vi tcpclient.awk
#TCP-client
BEGIN {
  "/inet/tcp/0/localhost/8080" |& getline
  print $0
  close("/inet/tcp/0/localhost/8080")
}

Now, execute the tcpclient program in another Terminal, as follows:

$ awk -f tcpclient.awk

The output of the previous code is as follows:

Fri Feb 9 09:42:22 IST 2018

UDP client and server (/inet/udp)

The server and client programs that use the UDP protocol for communication are almost identical to their TCP counterparts, the only difference being that the protocol is changed from tcp to udp. So the UDP server and UDP client programs can be written as follows:

$ vi udpserver.awk
#UDP-Server
BEGIN {
  print strftime() |& "/inet/udp/8080/0/0"
  "/inet/udp/8080/0/0" |& getline
  print $0
  close("/inet/udp/8080/0/0")
}

$ awk -f udpserver.awk

Here, one addition has been made to the client program: in the client, we send the message "hello from client!" to the server. So when we execute these programs, on the Terminal where udpclient.awk is run we get the remote system's date and time, and on the Terminal where udpserver.awk is run we get the hello message from the client:

$ vi udpclient.awk
#UDP-client
BEGIN {
  print "hello from client!" |& "/inet/udp/0/localhost/8080"
  "/inet/udp/0/localhost/8080" |& getline
  print $0
  close("/inet/udp/0/localhost/8080")
}

$ awk -f udpclient.awk

Note that GAWK can only open direct sockets. Currently, there is no way to access services available over an SSL connection, such as https, smtps, pop3s, imaps, and so on.
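One practical workaround for this SSL limitation (a sketch, not from the book) is to delegate the encrypted transport to an external command, here the widely available curl, and read its output through an ordinary pipe instead of a socket:

$ gawk 'BEGIN {
  cmd = "curl -s https://www.example.com/"   # curl handles the TLS layer
  while ((cmd | getline line) > 0)           # read the response line by line
    print line
  close(cmd)
}'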
Reading a web page using HttpService

To read a web page, we use the Hypertext Transfer Protocol (HTTP) service, which runs on port number 80. First, we redefine the record separators RS and ORS, because HTTP requires CR-LF to separate lines. The program makes a request to the IP address 35.164.82.168 (www.grymoire.com) of a static website, which in turn makes a GET request for the web page http://35.164.82.168/Unix/donate.html. The GET method tells the web server to transmit the web page donate.html. The output is read with getline using the coprocess operator and printed on the screen, line by line, using the while loop. Finally, we close the HTTP service connection. The following is the program to retrieve the web page:

$ vi view_webpage.awk
BEGIN {
  RS = ORS = "\r\n"
  http = "/inet/tcp/0/35.164.82.168/80"
  print "GET http://35.164.82.168/Unix/donate.html" |& http
  while ((http |& getline) > 0)
    print $0
  close(http)
}

$ awk -f view_webpage.awk

Upon executing the program, it fills the screen with the source code of the page, as follows:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML lang="en-US">
<HEAD>
<TITLE> Welcome to The UNIX Grymoire!</TITLE>
<meta name="keywords" content="grymoire, donate, unix, tutorials, sed, awk">
<META NAME="Description" CONTENT="Please donate to the Unix Grymoire" >
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link href="myCSS.css" rel="stylesheet" type="text/css">
<!-- Place this tag in your head or just before your close body tag -->
<script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
<link rel="canonical" href="http://www.grymoire.com/Unix/donate.html">
<link href="myCSS.css" rel="stylesheet" type="text/css">
........
........

Profiling in GAWK

Profiling of code is done for code optimization. In GAWK, we can profile by supplying the --profile option to GAWK while running the GAWK program. On execution, GAWK creates a file with the name awkprof.out. Since GAWK is performing profiling of the code, program execution is up to 45% slower than the speed at which GAWK normally executes.

Let's understand profiling by looking at an example. In the following example, we create a program that has four functions: two arithmetic functions, one function that prints an array, and one function that calls all of them. Our program also contains two BEGIN and two END blocks plus one pattern-action rule: first a BEGIN and an END block, then the pattern-action rule, then the second BEGIN and END blocks, as follows:

$ vi codeprof.awk
func z_array(){
  arr[30] = "volvo"
  arr[10] = "bmw"
  arr[20] = "audi"
  arr[50] = "toyota"
  arr["car"] = "ferrari"
  n = asort(arr)
  print "Array begins...!"
  print "====================="
  for ( v in arr )
    print v, arr[v]
  print "Array Ends...!"
  print "====================="
}
function mul(num1, num2){
  result = num1 * num2
  printf ("Multiplication of %d * %d : %d\n", num1, num2, result)
}
function all(){
  add(30,10)
  mul(5,6)
  z_array()
}
BEGIN {
  print "First BEGIN statement"
  print "====================="
}
END {
  print "First END statement "
  print "====================="
}
/maruti/{ print $0 }
BEGIN {
  print "Second BEGIN statement"
  print "====================="
  all()
}
END {
  print "Second END statement"
  print "====================="
}
function add(num1, num2){
  result = num1 + num2
  printf ("Addition of %d + %d : %d\n", num1, num2, result)
}

$ awk --profile -f codeprof.awk cars.dat

The output of the previous code is as follows:

First BEGIN statement
=====================
Second BEGIN statement
=====================
Addition of 30 + 10 : 40
Multiplication of 5 * 6 : 30
Array begins...!
=====================
1 audi
2 bmw
3 ferrari
4 toyota
5 volvo
Array Ends...!
=====================
maruti swift 2007 50000 5
maruti dezire 2009 3100 6
maruti swift 2009 4100 5
maruti esteem 1997 98000 1
First END statement
=====================
Second END statement
=====================

Execution of the previous program also creates a file with the name awkprof.out.
If we want to create this profile file with a custom name, we can specify the filename as an argument to the --profile option as follows:

$ awk --profile=codeprof.prof -f codeprof.awk cars.dat

Upon execution of the preceding command we get a new file with the name codeprof.prof. Let's try to understand the contents of the file created by the profiler:

# gawk profile, created Fri Feb 9 11:01:41 2018

# BEGIN rule(s)

BEGIN {
     1  print "First BEGIN statement"
     1  print "====================="
}

BEGIN {
     1  print "Second BEGIN statement"
     1  print "====================="
     1  all()
}

# Rule(s)

    12  /maruti/ { # 4
     4      print $0
}

# END rule(s)

END {
     1  print "First END statement "
     1  print "====================="
}

END {
     1  print "Second END statement"
     1  print "====================="
}

# Functions, listed alphabetically

     1  function add(num1, num2)
        {
     1      result = num1 + num2
     1      printf "Addition of %d + %d : %d\n", num1, num2, result
        }

     1  function all()
        {
     1      add(30, 10)
     1      mul(5, 6)
     1      z_array()
        }

     1  function mul(num1, num2)
        {
     1      result = num1 * num2
     1      printf "Multiplication of %d * %d : %d\n", num1, num2, result
        }

     1  function z_array()
        {
     1      arr[30] = "volvo"
     1      arr[10] = "bmw"
     1      arr[20] = "audi"
     1      arr[50] = "toyota"
     1      arr["car"] = "ferrari"
     1      n = asort(arr)
     1      print "Array begins...!"
     1      print "====================="
     5      for (v in arr) {
     5          print v, arr[v]
            }
     1      print "Array Ends...!"
     1      print "====================="
        }

This profiling example explains the basic features of profiling in GAWK. They are as follows:

- Reading the file from top to bottom shows the order in which the various rules are executed. First the BEGIN rules are listed, followed by the BEGINFILE rules, if any. Then pattern-action rules are listed. Thereafter, ENDFILE rules and END rules are printed. Finally, functions are listed in alphabetical order.
- Multiple BEGIN and END rules retain their places as separate identities. The same is also true for the BEGINFILE and ENDFILE rules.
- Pattern-action rules have two counts. The first number, to the left of the rule, tells how many times the rule's pattern was tested against the input records. The second number, to the right of the rule's opening left brace, with a comment, shows how many times the rule's action was executed when the rule evaluated to true. The difference between the two indicates how many times the rule's pattern evaluated to false.
- If there is an if-else statement, the count shows how many times the condition was tested. At the right of the opening left brace for its body is a count showing how many times the condition was true. The count for the else statement tells how many times the test failed.
- The count at the beginning of a loop header (for or while loop) shows how many times the loop's conditional expression was executed.
- In user-defined functions, the count before the function keyword tells how many times the function was called. The counts next to the statements in the body show how many times those statements were executed.
- The layout of each block uses C-style tabs for code alignment. Braces are used to mark the opening and closing of a code block, similar to C style.
- Parentheses are used as per the precedence rules and the structure of the program, but only when needed.
- printf or print statement arguments are enclosed in parentheses only if the statement is followed by redirection.
GAWK also emits leading comments before rules, such as before BEGIN and END rules, BEGINFILE and ENDFILE rules, and pattern-action rules, and before functions. GAWK provides a standard representation in the profiled version of the program.

GAWK also accepts another option, --pretty-print. The following is an example of pretty-printing an AWK program:

$ awk --pretty-print -f codeprof.awk cars.dat

When GAWK is called with --pretty-print, the program again generates awkprof.out, but this time without any execution counts in the output. Pretty-printed output also preserves any comments from the original program, whereas the --profile option omits them. The file created on execution of the program with the --pretty-print option is as follows:

# gawk profile, created Fri Feb 9 11:04:19 2018

# BEGIN rule(s)

BEGIN {
    print "First BEGIN statement"
    print "====================="
}

BEGIN {
    print "Second BEGIN statement"
    print "====================="
    all()
}

# Rule(s)

/maruti/ {
    print $0
}

# END rule(s)

END {
    print "First END statement "
    print "====================="
}

END {
    print "Second END statement"
    print "====================="
}

# Functions, listed alphabetically

function add(num1, num2)
{
    result = num1 + num2
    printf "Addition of %d + %d : %d\n", num1, num2, result
}

function all()
{
    add(30, 10)
    mul(5, 6)
    z_array()
}

function mul(num1, num2)
{
    result = num1 * num2
    printf "Multiplication of %d * %d : %d\n", num1, num2, result
}

function z_array()
{
    arr[30] = "volvo"
    arr[10] = "bmw"
    arr[20] = "audi"
    arr[50] = "toyota"
    arr["car"] = "ferrari"
    n = asort(arr)
    print "Array begins...!"
    print "====================="
    for (v in arr) {
        print v, arr[v]
    }
    print "Array Ends...!"
    print "====================="
}

To summarize, we looked at the basics of network programming and profiling in GAWK. Do check out the book Learning AWK Programming to know more about the intricacies of AWK programming for text processing.

Read next:
- 20 ways to describe programming in 5 words
- What is Mob Programming?

Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range

Savia Lobo
15 Jul 2019
6 min read
Last month, the team behind the Linux kernel announced a patch that allows 0.0.0.0/8 as a valid address range. This patch allows these 16 million new IPv4 addresses to appear within a box or on the wire. The aim is to use 0/8 as global unicast, as this range was never used except for 0.0.0.0 itself.

In a post written by Dave Taht, Director of the Make-Wifi-Fast project, and committed by David Stephen Miller, an American software developer working on the Linux kernel, it is mentioned that the use of 0.0.0.0/8 has been prohibited since the early Internet due to two issues.

First, an interoperability problem with BSD 4.2 in 1984, which was fixed in BSD 4.3 in 1986. "BSD 4.2 has long since been retired", the post mentions.

The second issue is that addresses of the form 0.x.y.z were initially defined only as a source address in an ICMP datagram, indicating "node number x.y.z on this IPv4 network", by nodes that know their address on their local network but do not yet know their network prefix, in RFC0792 (page 19). The use of 0.x.y.z was later repealed in RFC1122, because the original ICMP-based mechanism for learning the network prefix was unworkable on many networks such as Ethernet, which have longer addresses that would not fit into the 24 "node number" bits. Modern networks use reverse ARP (RFC0903), BOOTP (RFC0951), or DHCP (RFC2131) to find their full 32-bit address and CIDR netmask (and other parameters such as default gateways). As a result, the 16,777,215 addresses in the 0.0.0.0/8 space have been left unused and reserved for future use since 1989.

The discussion about making these IP addresses available started early this year at NetDevConf 2019, the technical conference on Linux networking, which took place in Prague, Czech Republic, from March 20th to 22nd, 2019. One of the sessions, "Potential IPv4 Unicast Expansions", conducted by Dave Taht along with John Gilmore and Paul Wouters, explains that IPv4's success story was in carrying unicast packets worldwide. The speakers say service sites still need IPv4 addresses for everything, since the majority of Internet client nodes don't yet have IPv6 addresses. IPv4 addresses now cost 15 to 20 dollars apiece (times the size of your network!) and the price is rising.

In their keynote, they described how the IPv4 address space includes hundreds of millions of addresses reserved for obscure (the ranges 0/8 and 127/16) or obsolete (225/8-231/8) reasons, or for "future use" (240/4, otherwise known as class E). They highlighted the fact: "instead of leaving these IP addresses unused, we have started an effort to make them usable, generally. This work stalled out 10 years ago, because IPv6 was going to be universally deployed by now, and reliance on IPv4 was expected to be much lower than it in fact still is".

"We have been reporting bugs and sending patches to various vendors. For Linux, we have patches accepted in the kernel and patches pending for the distributions, routing daemons, and userland tools. Slowly but surely, we are decontaminating these IP addresses so they can be used in the near future. Many routers already handle many of these addresses, or can easily be configured to do so, and so we are working to expand unicast treatment of these addresses in routers and other OSes", they further mentioned.
They said they wanted to carry out an "authorized experiment to route some of these addresses globally, monitor their reachability from different parts of the Internet, and talk to ISPs who are not yet treating them as unicast to update their networks".

The patch itself for 0.0.0.0/8 is shown as an image in the original post; in essence, it stops treating the whole of 0/8 as the special "zero network", leaving only 0.0.0.0 itself special-cased.

Users have had a mixed reaction to this announcement, having assumed that these addresses would stay unassigned forever. A few are of the opinion that for most businesses, IPv6 is an unnecessary headache.

A user explained the difference between the address ranges in a reply to a post by Jeremy Stretch, a network engineer: "0.0.0.0/8 - Addresses in this block refer to source hosts on 'this' network. Address 0.0.0.0/32 may be used as a source address for this host on this network; other addresses within 0.0.0.0/8 may be used to refer to specified hosts on this network [RFC1700, page 4]."

A user on Reddit writes that this announcement will probably get "the same reaction when 1.1.1.1 and 1.0.0.1 became available, and AT&T blocked it 'by accident' or most equipment vendors or major ISP will use 0.0.0.0/8 as a loopback interface or test interface because they never thought it would be assigned to anyone."

Another user, elegant treader, writes, "I could actually see us successfully inventing, and implementing, a multiverse concept for ipv4 to make these 32 bit addresses last another 40 years, as opposed to throwing these non-upgradable, hardcoded v4 devices out".

Another writes that if they would have "taken IPv4 and added more bits - we might all be using IPv6 now". The user further mentions, "Instead they used the opportunity to cram every feature but the kitchen sink in there, so none of the hardware vendors were interested in implementing it and the backbones were slow to adopt it. So we got mass adoption of NAT instead of mass adoption of IPv6".

A user explains, "A single /8 isn't going to meaningfully impact the exhaustion issues IPv4 faces. I believe it was APNIC a couple of years ago who said they were already facing allocation requests equivalent to an /8 a month". "It's part of the reason hand-wringing over some of the 'wasteful' /8s that were handed out to organizations in the early days is largely pointless. Even if you could get those orgs to consolidate and give back large useable ranges in those blocks, there's simply not enough there to meaningfully change the long term mismatch between demand and supply", the user further adds.

To know about these developments in detail, watch Dave Taht's keynote video on YouTube: https://www.youtube.com/watch?v=92aNK3ftz6M&feature=youtu.be

Read next:
- An attack on SKS Keyserver Network, a write-only program, poisons two high-profile OpenPGP certificates
- Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!
- Amazon adds UDP load balancing support for Network Load Balancer

Puppet Server and Agents

Packt
16 Aug 2017
18 min read
In this article by Martin Alfke, the author of the book Puppet Essentials - Third Edition, we will cover the following topics:

- The Puppet server
- Setting up the Puppet agent

The Puppet server

Many Puppet-based workflows are centered on the server, which is the central source of configuration data and authority. The server hands instructions to all the computer systems in the infrastructure (where agents are installed). It serves multiple purposes in the distributed system of Puppet components. The server will perform the following tasks:

- Storing manifests and compiling catalogs
- Serving as the SSL certification authority
- Processing reports from the agent machines
- Gathering and storing information about the agents

As such, the security of your server machine is paramount. The requirements for hardening are comparable to those of a Kerberos Key Distribution Center.

During its first initialization, the Puppet server generates the CA certificate. This self-signed certificate will be distributed among and trusted by all the components of your infrastructure. This is why its private key must be protected very carefully. New agent machines request individual certificates, which are signed with the CA certificate.

The terminology around the master software might be a little confusing. That's because both the terms, Puppet Master and Puppet Server, are floating around, and they are closely related too. Let's consider some technological background in order to give you a better understanding of what is what. Puppet's master service mainly comprises a RESTful HTTP API. Agents initiate the HTTPS transactions, with both sides identifying each other using trusted SSL certificates. During the time when Puppet 3 and older versions were current, the HTTPS layer was typically handled by Apache, and Puppet's Ruby core was invoked through the Passenger module. This approach offered good stability and scalability. Puppet Inc. has improved upon this standard solution with a specialized software called puppetserver. The Ruby-based core of the master remains basically unchanged, although it now runs on JRuby instead of Ruby's own MRI. The HTTPS layer is run by Jetty, sharing the same Java Virtual Machine with the master. By cutting out some middlemen, puppetserver is faster and more scalable than a Passenger solution. It is also significantly easier to set up.

Setting up the server machine

Getting the puppetserver software onto a Linux machine is just as simple as the agent package. Packages are available for Red Hat Enterprise Linux and its derivatives, Debian and Ubuntu, and any other operating system that is supported to run a Puppet server. For now, the Puppet server must run on a Linux-based operating system and cannot run on Windows or any other Unix.

A great way to get Puppet Inc. packages on any platform is the Puppet Collection. Shortly after the release of Puppet 4, Puppet Inc. created this new way of supplying software. This can be considered a distribution in its own right. Unlike Linux distributions, it does not contain a kernel, system tools, and libraries. Instead, it comprises various software from the Puppet ecosystem. Software versions that are available from the same Puppet Collection are guaranteed to work well together. Use the following commands to install puppetserver from the first Puppet Collection (PC1) on a Debian 7 machine. (The Collection for Debian 8 had not yet received a puppetserver package at the time of writing this.)
root@puppetmaster# wget http://apt.puppetlabs.com/puppetlabs-release-pc1-jessie.deb
root@puppetmaster# dpkg -i puppetlabs-release-pc1-jessie.deb
root@puppetmaster# apt-get update
root@puppetmaster# apt-get install puppetserver

The puppetserver package comprises only the Jetty server and the Clojure API, but the all-in-one puppet-agent package is pulled in as a dependency. The package name, puppet-agent, is misleading: this AIO package contains all the parts of Puppet, including the master core, a vendored Ruby build, and several pieces of additional software. Specifically, you can use the puppet command on the master node. You will soon learn how this is useful. However, when using the packages from Puppet Labs, everything gets installed under /opt/puppetlabs. It is advisable to make sure that your PATH variable always includes the /opt/puppetlabs/bin directory so that the puppet command is found there.

Regardless of this, once the puppetserver package is installed, you can start the master service:

root@puppetmaster# systemctl start puppetserver

Depending on the power of your machine, the startup can take a few minutes. Once initialization completes, the server will operate very smoothly though. As soon as the master port 8140 is open, your Puppet master is ready to serve requests.

If the service fails to start, there might be an issue with certificate generation. (We observed such issues with some versions of the software.) Check the log file at /var/log/puppetlabs/puppetserver/puppetserver-daemon.log. If it indicates that there are problems while looking up its certificate file, you can work around the problem by temporarily running a standalone master as follows: puppet master --no-daemonize. After initialization, you can stop this process. The certificate is available now, and puppetserver should be able to start as well. Another reason for start failures is an insufficient amount of memory: the Puppet server process needs 2 GB of memory.

Creating the master manifest

The master compiles manifests for many machines, but the agent does not get to choose which source file is to be used—this is completely at the master's discretion. The starting point for any compilation by the master is always the site manifest, which can be found in /opt/puppetlabs/code/environments/production/manifests/. Each connecting agent will use all the manifests found here. Of course, you don't want to manage only one identical set of resources on all your machines. To define a piece of manifest exclusively for a specific agent, put it in a node block. This block's contents will only be considered when the calling agent has a matching common name in its SSL certificate. You can dedicate a piece of the manifest to a machine with the name agent, for example:

node 'agent' {
  $packages = [ 'apache2',
    'libapache2-mod-php5',
    'libapache2-mod-passenger', ]
  package { $packages:
    ensure => 'installed',
    before => Service['apache2'],
  }
  service { 'apache2':
    ensure => 'running',
    enable => true,
  }
}
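Besides blocks for specific agents, Puppet also supports a catch-all block. As a small illustrative sketch (not from the book), the following node default block applies to every agent that has no dedicated node definition of its own:

node default {
  # applied to any agent whose certificate name matches no other node block
  package { 'ntp':
    ensure => 'installed',
  }
}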
Before you set up and connect your first agent to the master, step back and think about how the master should be addressed. By default, agents will try to resolve the unqualified puppet hostname in order to get the master's address. If you have a default domain that is being searched by your machines, you can use this as a default and add a record for puppet as a subdomain (such as puppet.example.net). Otherwise, pick a domain name that seems fitting to you, such as master.example.net or adm01.example.net. What's important is the following:

- All your agent machines can resolve the name to an address
- The master process is listening for connections on that address
- The master uses a certificate with the chosen name as CN or DNS Alt Names

The mode of resolution depends on your circumstances—the hosts file on each machine is one ubiquitous possibility. The Puppet server listens on all the available addresses by default. This leaves the task of creating a suitable certificate, which is simple. Configure the master to use the appropriate certificate name and restart the service. If the certificate does not exist yet, Puppet will take the necessary steps to create it. Put the following setting into your /etc/puppetlabs/puppet/puppet.conf file on the master machine:

[main]
certname=puppetmaster.example.net

In Puppet versions before 4.0, the default location for the configuration file is /etc/puppet/puppet.conf.

Upon its next start, the master will use the appropriate certificate for all SSL connections. The automatic proliferation of SSL data is not dangerous even in an existing setup, except for the certification authority. If the master were to generate a new CA certificate at any point in time, it would break the trust of all existing agents. Make sure that the CA data is neither lost nor compromised. All previously signed certificates become obsolete whenever Puppet needs to create a new certification authority. The default storage location is /etc/puppetlabs/puppet/ssl/ca for Puppet 4.0 and higher, and /var/lib/puppet/ssl/ca for older versions.

Inspecting the configuration settings

All the customization of the master's parameters can be made in the puppet.conf file. The operating system packages ship with some settings that are deemed sensible by the respective maintainers. Apart from these explicit settings, Puppet relies on defaults that are either built in or derived from the environment:

root@puppetmaster# puppet master --configprint manifest
/etc/puppetlabs/code/environments/production/manifests

Most users will want to rely on these defaults for as many settings as possible. This is possible without any drawbacks because Puppet makes all settings fully transparent using the --configprint parameter. For example, you can find out where the master manifest files are located, as shown above. To get an overview of all available settings and their values, use the following command:

root@puppetmaster# puppet master --configprint all | less

While this command is especially useful on the master side, the same introspection is available for puppet apply and puppet agent. Setting specific configuration entries is possible with the puppet config command:

root@puppetmaster# puppet config set --section main certname puppetmaster.example.net

Setting up the Puppet agent

As was explained earlier, the master mainly serves instructions to agents in the form of catalogs that are compiled from the manifest. You have also prepared a node block for your first agent in the master manifest. The plain Puppet package that allows you to apply a local manifest contains all the required parts in order to operate a proper agent. If you are using Puppet Labs packages, you need not install the puppetserver package. Just get puppet-agent instead.
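For reference, the agent installation on a Debian machine mirrors the server installation shown earlier. This is a sketch assuming the same Puppet Collection (PC1) repository package; adjust the release file to your distribution:

root@agent# wget http://apt.puppetlabs.com/puppetlabs-release-pc1-jessie.deb
root@agent# dpkg -i puppetlabs-release-pc1-jessie.deb
root@agent# apt-get update
root@agent# apt-get install puppet-agent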
After a successful package installation, you need to tell the Puppet agent where it can find the puppetserver:

root@agent# puppet config set --section agent server puppetmaster.example.net

Afterwards, the following invocation is sufficient for an initial test:

root@agent# puppet agent --test
Info: Creating a new SSL key for agent
Error: Could not request certificate: getaddrinfo: Name or service not known
Exiting; failed to retrieve certificate and waitforcert is disabled

Puppet first created a new SSL certificate key for itself. For its own name, it picked agent, which is the machine's hostname. That's fine for now. An error occurred because the puppet name cannot currently be resolved to anything. Add this to /etc/hosts so that Puppet can contact the master:

root@agent# puppet agent --test
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for agent
Info: Certificate Request fingerprint (SHA256): 52:65:AE:24:5E:2A:C6:17:E2:5D:0A:C9:86:E3:52:44:A2:EC:55:AE:3D:40:A9:F6:E1:28:31:50:FC:8E:80:69
Error: Could not request certificate: Error 500 on SERVER: Internal Server Error: java.io.FileNotFoundException: /etc/puppetlabs/puppet/ssl/ca/requests/agent.pem (Permission denied)
Exiting; failed to retrieve certificate and waitforcert is disabled

Note how Puppet conveniently downloaded and cached the CA certificate. The agent will establish trust based on this certificate from now on. Puppet created a certificate request and sent it to the master. It then immediately tried to download the signed certificate. This is expected to fail—the master won't just sign a certificate for any request it receives. This behavior is important for proper security. There is a configuration setting that enables such automatic signing, but users are generally discouraged from using this setting, because it allows the creation of arbitrary numbers of signed (and therefore trusted) certificates by any user who has network access to the master.

To authorize the agent, look for the CSR on the master using the puppet cert command:

root@puppetmaster# puppet cert --list
"agent" (SHA256) 52:65:AE:24:5E:2A:C6:17:E2:5D:0A:C9:86:E3:52:44:A2:EC:55:AE:3D:40:A9:F6:E1:28:31:50:FC:8E:80:69

This looks alright, so now you can sign a new certificate for the agent:

root@puppetmaster# puppet cert --sign agent
Notice: Signed certificate request for agent
Notice: Removing file Puppet::SSL::CertificateRequest agent at '/etc/puppetlabs/puppet/ssl/ca/requests/agent.pem'

When choosing the action for puppet cert, the dashes in front of the option name can be omitted—you can just use puppet cert list and puppet cert sign. Now the agent can receive its certificate for its catalog run, as follows:

root@agent# puppet agent --test
Info: Caching certificate for agent
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for agent
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for agent
Info: Applying configuration version '1437065761'
Notice: Applied catalog in 0.11 seconds

The agent is now fully operational. It received a catalog and applied all resources within. Before you read on to learn how the agent usually operates, there is a note that is especially important for users of Puppet 3: because the bare puppet name is not the common name in the master's certificate, the preceding procedure will not even work with a Puppet 3.x master.
It works with puppetserver and Puppet 4 because the default puppet name is now included in the certificate's Subject Alternative Names by default. It is tidier not to rely on this alias name, though. After all, in production you will probably want to make sure that the master has a fully qualified name that can be resolved, at least inside your network. You should therefore add the following to the main section of puppet.conf on each agent machine:

[agent]
server=master.example.net

In the absence of DNS to resolve this name, your agent will need an appropriate entry in its hosts file or a similar alternative means of address resolution. These steps are necessary in a Puppet 3.x setup. If you have been following along with a Puppet 4 agent, you might notice that after this change, it generates a new certificate signing request:

root@agent# puppet agent --test
Info: Creating a new SSL key for agent.example.net
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for agent.example.net
Info: Certificate Request fingerprint (SHA256): 85:AC:3E:D7:6E:16:62:BD:28:15:B6:18:12:8E:5D:1C:4E:DE:DF:C3:4E:8F:3E:20:78:1B:79:47:AE:36:98:FD
Exiting; no certificate found and waitforcert is disabled

If this happens, you will have to use puppet cert sign on the master again. The agent will then retrieve a new certificate.

The agent's life cycle

In a Puppet-centric workflow, you typically want all changes to the configuration of servers (perhaps even workstations) to originate on the Puppet master and propagate to the agents automatically. Each new machine gets integrated into the Puppet infrastructure with the master at its center, and gets removed during decommissioning.

The very first step—generating a key and a certificate signing request—is always performed implicitly and automatically at the start of an agent run if no local SSL data exists yet. Puppet creates the required data if no appropriate files are found. (There is a short description of how to trigger this behavior manually later in this section.) The next step is usually the signing of the agent's certificate, which is performed on the master. It is a good practice to monitor the pending requests by listing them on the console:

root@puppetmaster# puppet cert list
root@puppetmaster# puppet cert sign '<agent fqdn>'

From this point on, the agent will periodically check with the master to load updated catalogs. The default interval for this is 30 minutes. The agent will perform a run of the catalog each time and check the sync state of all the contained resources. The run is performed for unchanged catalogs as well, because the sync states can change between runs.

Before you manage to sign the certificate, the agent process will query the master at short intervals for a while. This avoids a delay of up to 30 minutes if the certificate is not ready right when the agent starts up. Launching this background process can be done manually through a simple command:

root@agent# puppet agent

However, it is preferable to do this through the puppet system service.

When an agent machine is taken out of active service, its certificate should be invalidated. As is customary with SSL, this is done through revocation. The master adds the serial number of the certificate to its certificate revocation list. This list, too, is shared with each agent machine.
Revocation is initiated on the master through the puppet cert command:

root@puppetmaster# puppet cert revoke agent

The updated CRL is not honored until the master service is restarted. If security is a concern, this step must not be postponed. The agent can then no longer use its old certificate:

root@agent# puppet agent --test
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: SSL_connect SYSCALL returned=5 errno=0 state=unknown state
[...]
Error: Could not retrieve catalog from remote server: SSL_connect SYSCALL returned=5 errno=0 state=unknown state
[...]

Renewing an agent's certificate

Sometimes, it is necessary during an agent machine's life cycle to regenerate its certificate and related data—the reasons can include data loss, human error, or certificate expiration, among others. Performing the regeneration is quite simple: all relevant files are kept at /etc/puppetlabs/puppet/ssl (for Puppet 3.x, this is /var/lib/puppet/ssl) on the agent machine. Once these files are removed (or rather, the whole ssl/ directory tree), Puppet will renew everything on the next agent run. Of course, a new certificate must be signed. This requires some preparation—just initiating the request from the agent will fail:

root@agent# puppet agent --test
Info: Creating a new SSL key for agent
Info: Caching certificate for ca
Info: Caching certificate for agent.example.net
Error: Could not request certificate: The certificate retrieved from the master does not match the agent's private key.
Certificate fingerprint: 6A:9F:12:C8:75:C0:B6:10:45:ED:C3:97:24:CC:98:F2:B6:1A:B5:4C:E3:98:96:4F:DA:CD:5B:59:E0:7F:F5:E6
Exiting; failed to retrieve certificate and waitforcert is disabled

The master still has the old certificate cached. This is a simple protection against the impersonation of your agents by unauthorized entities. To fix this, remove the certificate from both the master and the agent and then start a Puppet run, which will automatically regenerate a certificate.

On the master:

puppet cert clean agent.example.net

On the agent, on most platforms:

find /etc/puppetlabs/puppet/ssl -name agent.example.net.pem -delete

On Windows:

del "/etc/puppetlabs/puppet/ssl/agent.example.net.pem" /f

Then run puppet agent -t again. Once you perform the cleanup operation on the master, as advised in the preceding output, and remove the indicated file from the agent machine, the agent will be able to successfully place its new CSR:

root@puppetmaster# puppet cert clean agent
Notice: Revoked certificate with serial 18
Notice: Removing file Puppet::SSL::Certificate agent at '/etc/puppetlabs/puppet/ssl/ca/signed/agent.pem'
Notice: Removing file Puppet::SSL::Certificate agent at '/etc/puppetlabs/puppet/ssl/certs/agent.pem'

The rest of the process is identical to the original certificate creation: the agent uploads its CSR to the master, where the certificate is created through the puppet cert sign command.
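To recap the renewal flow end to end, the sequence looks like this. It is a condensed sketch assuming Puppet 4 paths and an agent named agent.example.net:

root@puppetmaster# puppet cert clean agent.example.net    # drop the cached certificate
root@agent# rm -rf /etc/puppetlabs/puppet/ssl             # discard all local SSL data
root@agent# puppet agent --test                           # generates a new key and CSR
root@puppetmaster# puppet cert sign agent.example.net     # sign the new request
root@agent# puppet agent --test                           # retrieves the certificate and runs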
Running the agent from cron

There is an alternative way to operate the agent. We covered starting one long-running puppet agent process that does its work at set intervals and then goes back to sleep. However, it is also possible to have cron launch a discrete agent process at the same interval. This agent will contact the master once, run the received catalog, and then terminate. This has several advantages:

- The agent operating system saves some resources
- The interval is precise and not subject to skew (when running the background agent, deviations result from the time that elapses during the catalog run), and distributed interval skew can lead to thundering herd effects
- Any agent crash or inadvertent termination is not fatal

Setting Puppet to run the agent from cron is also very easy to do—with Puppet! You can use a manifest such as the following:

service { 'puppet':
  enable => false,
}
cron { 'puppet-agent-run':
  user    => 'root',
  command => 'puppet agent --no-daemonize --onetime --logdest=syslog',
  minute  => fqdn_rand(60),
  hour    => absent,
}

The fqdn_rand function computes a distinct minute for each of your agents. Setting the hour property to absent means that the job should run every hour.

Summary

In this article we have learned about the Puppet server and how to set up the Puppet agent.

Resources for Article:

Further resources on this subject:
- Quick start – Using the core Puppet resource types [article]
- Understanding the Puppet Resources [article]
- Puppet: Integrating External Tools [article]

Initial Configuration of SCO 2016

Packt
17 Jul 2017
13 min read
In this article by Michael Seidl, author of the book Microsoft System Center 2016 Orchestrator Cookbook - Second Edition, we will show you how to set up an Orchestrator environment and how to deploy and configure Orchestrator integration packs.

Deploying an additional Runbook Designer

The Runbook Designer is the key tool for building your Runbooks. After the initial installation, the Runbook Designer is installed on the server. For your daily work with Orchestrator and Runbooks, you will want to install the Runbook Designer on your client or on an admin server. We will go through these steps in this recipe.

Getting ready

You must review the planning the Orchestrator deployment recipe before performing the steps in this recipe. There are a number of dependencies in the planning recipe you must perform in order to successfully complete the tasks in this recipe. You must install a management server before you can install additional Runbook Designers. The user account performing the installation must have administrative privileges on the server nominated for the SCO deployment and must also be a member of OrchestratorUsersGroup or have equivalent rights. The example deployment in this recipe is based on the following configuration details:

- Management server called TLSCO01 with a remote database, already installed
- System Center 2016 Orchestrator

How to do it...

The Runbook Designer is used to build Runbooks using standard activities and/or integration pack activities. The Designer can be installed on either a server-class or a client-class operating system. Follow these steps to deploy an additional Runbook Designer using the Deployment Manager:

1. Install a supported operating system and join it to the Active Directory domain in scope of the SCO deployment. In this recipe the operating system is Windows 10.
2. Ensure you configure the allowed ports and services if the local firewall is enabled for the domain profile. See the following link for details: https://technet.microsoft.com/en-us/library/hh420382(v=sc.12).aspx.
3. Log in to the SCO management server with a user account with SCO administrative rights.
4. Launch System Center 2016 Orchestrator Deployment Manager.
5. Right-click on Runbook Designers and select Deploy new Runbook Designer.
6. Click on Next on the welcome page.
7. Type the computer name in the Computer field and click on Add. Click on Next.
8. On the Deploy Integration Packs or Hotfixes page, check all the integration packs required by the user of the Runbook Designer (for this example we will select the AD IP). Click on Next.
9. Click on Finish to begin the installation using the Deployment Manager.

How it works...

The Deployment Manager is a great option for scaling out your Runbook servers and also for distributing the Runbook Designer without the need for the installation media. In both cases the Deployment Manager connects to the management server and the database server to configure the necessary settings. On the target system the Deployment Manager installs the required binaries and optionally deploys the selected integration packs. Using the Deployment Manager provides a consistent and coordinated approach to scaling out the components of a SCO deployment.
See also

The following official web link is a great source of the most up-to-date information on SCO: https://docs.microsoft.com/en-us/system-center/orchestrator/

Registering an SCO Integration Pack

Microsoft System Center 2016 Orchestrator (SCO) automation is driven by process automation components. These process automation components are similar in concept to a physical toolbox. In a toolbox you typically have different types of tools which enable you to build what you desire. In the context of SCO these tools are known as activities. Activities fall into two main categories:

- Built-in standard activities: These are the default activity categories available to you in the Runbook Designer. The standard activities on their own provide you with a set of components to create very powerful Runbooks.
- Integration pack activities: Integration pack activities are provided either by Microsoft, the community, solution integration organizations, or are custom created by using the Orchestrator Integration Pack Toolkit. These activities provide you with the Runbook components to interface with the target environment of the IP. For example, the Active Directory IP has the activities you can perform in the target Active Directory environment.

This recipe provides the steps to find and register the second type of activities into your default implementation of SCO.

Getting ready

You must download the integration pack(s) you plan to deploy from the provider of the IP. In this example we will be deploying the Active Directory IP, which can be found at the following link: https://www.microsoft.com/en-us/download/details.aspx?id=54098. You must have deployed a System Center 2016 Orchestrator environment and have full administrative rights in the environment.

How to do it...

We will deploy the Microsoft Active Directory (AD) integration pack (IP).

Integration pack organization: A good practice is to create a folder structure for your integration packs. The folders should reflect the versions of the IPs for logical grouping and management. The version of the IP will be visible in the console, and as such you must perform this step after you have loaded the IP(s). This approach will aid change management when updating IPs in multiple environments (see the example layout after this recipe's steps).

Follow these steps to register the Active Directory integration pack:

1. Identify the source location for the integration pack in scope (for example, the AD IP for SCO 2016).
2. Download the IP to a local directory on the management server or a UNC share.
3. Log in to the SCO management server.
4. Launch the Deployment Manager.
5. Under Orchestrator Management Server, right-click on Integration Packs and select Register IP with the Orchestrator Management Server.
6. Click on Next on the welcome page.
7. Click on Add on the Select Integration Packs or Hotfixes page.
8. Navigate to the directory where the target IP is located, click on Open, and then click on Next.
9. Click on Finish.
10. Click on Accept on the End-User License Agreement to complete the registration.
11. Click on Refresh to validate that the IP has been registered successfully.

How it works...

The process of loading an integration pack is simple. The prerequisite for successfully registering (loading) the IP is ensuring you have downloaded a supported IP to a location accessible to the SCO management server. Additionally, the person performing the registration must be a SCO administrator.
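As an example of the folder organization mentioned earlier (the paths, pack names, and version numbers are hypothetical and purely illustrative):

D:\SCO_IntegrationPacks\
    ActiveDirectory\
        7.2\
            SC2016_IP_ActiveDirectory.oip
    Exchange\
        7.1\
            SC2016_IP_Exchange.oip

A layout like this makes it obvious which IP version is registered in which environment when you later update packs.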
At this point we have registered the integration pack with the management server. Two steps are still necessary before we can use the integration pack; see the following recipes.

There's more...

Registering the IP is the first part of the process of making the IP activities available to Runbook Designers and Runbook servers. The next step is the deployment of the integration pack to the Runbook Designers; see the next recipe for that. Orchestrator integration packs are provided not only by Microsoft; third-party companies such as Cisco or NetApp also provide OIPs for their products. Additionally, there is a large community providing Orchestrator integration packs. There are several sources for downloading integration packs; here are some useful links:

- http://www.techguy.at/liste-mit-integration-packs-fuer-system-center-orchestrator/
- http://scorch.codeplex.com/
- https://www.microsoft.com/en-us/download/details.aspx?id=54098

Deploying the IP to Designers and Runbook servers

Registering the Orchestrator integration pack is only the first step; you also need to deploy the OIP to your Designer or Runbook server.

Getting ready

You have to follow the steps described in the recipe Registering an SCO Integration Pack before you can start with the next steps to deploy an OIP.

How to do it...

In our example we will deploy the Active Directory integration pack to our Runbook Designer. Once the IP in scope (the AD IP in our example) has been registered successfully, follow these steps to deploy it to the Runbook Designers and Runbook servers:

1. Log in to the SCO management server and launch the Deployment Manager.
2. Under Orchestrator Management Server, right-click on the integration pack in scope and select Deploy IP to Runbook Server or Runbook Designer.
3. Click on Next on the welcome page, select the IP you would like to deploy (in our example, System Center Integration Pack for Active Directory), and then click on Next.
4. On the computer selection page, type the name of the Runbook server or Designer in scope and click on Add (repeat for all servers in scope).
5. On the Installation Options page you have the following three options:
   - Schedule the installation: Select this option if you want to schedule the deployment for a specific time. You still have to select one of the next two options.
   - Stop all running Runbooks before installing the Integration Packs or Hotfixes: This option will, as described, stop all current Runbooks in the environment.
   - Install the Integration Packs or Hotfixes without stopping the running Runbooks: This is the preferred option if you want a controlled deployment without impacting current jobs.
6. Click on Next after making your installation option selection.
7. Click on Finish.

The integration pack will be deployed to all selected Designers and Runbook servers. You must close all Runbook Designer consoles and re-launch them to see the newly deployed integration pack.

How it works...

The process of deploying an integration pack is simple. The prerequisite for successful deployment is ensuring you have registered a supported IP on the SCO management server. Now we have successfully deployed an Orchestrator integration pack. If you have deployed it to a Runbook Designer, make sure you close and reopen the Designer to be able to use the activities in this integration pack.
Now you are able to use these activities to build your Runbooks; the only thing left to do is follow our next recipe and configure this Integration Pack. These steps can be repeated for each single Integration Pack, and you can also deploy multiple OIPs in one deployment.

There's more...

You have to deploy an OIP to every single Designer and Runbook Server on which you want to work with the activities. Whether you want to edit a Runbook with the Designer or run a Runbook on a particular Runbook Server, the OIP has to be deployed to both. With the Orchestrator Deployment Manager, this is an easy task to do.

Initial Integration Pack configuration

This recipe provides the steps required to configure an integration pack for use once it has been successfully deployed to a Runbook designer.

Getting ready

You must deploy an Orchestrator environment and also deploy the IP you plan to configure to a Runbook designer before following the steps in this recipe. The authors assume the user account performing the installation has administrative privileges on the server nominated for the SCO Runbook designer.

How to do it...

Each integration pack serves as an interface to the actions SCO can perform in the target environment. In our example we will be focusing on the Active Directory connector. We will have two accounts under two categories of AD tasks in our scenario:

IP name           Category of actions               Account name
Active Directory  Domain Account Management         SCOAD_ACCMGT
Active Directory  Domain Administrator Management   SCOAD_DOMADMIN

The following diagram provides a visual summary and order of the tasks you need to perform to complete this recipe. Follow these steps to complete the configuration of the Active Directory IP options in the Runbook Designer:

1. Create or identify an existing account for the IP tasks. In our example we are using two accounts to represent two personas of a typical Active Directory delegation model: SCOAD_ACCMGT is an account with the rights to perform account management tasks only, and SCOAD_DOMADMIN is a domain admin account for elevated tasks in Active Directory.
2. Launch the Runbook Designer as an SCO administrator, select Options from the menu bar, and select the IP to configure (in our example, Active Directory).
3. Click on Add, type AD Account Management in the Name: field, and select Microsoft Active Directory Domain Configuration in the Type field. In the Properties section type the following:
   Configuration User Name: SCOAD_ACCMGT
   Configuration Password: Enter the password for SCOAD_ACCMGT
   Configuration Domain Controller Name (FQDN): The FQDN of an accessible domain controller in the target AD (in this example, TLDC01.TRUSTLAB.LOCAL)
   Configuration Default Parent Container: This is an optional field. Leave it blank.
4. Click on OK.
5. Repeat steps 3 and 4 for the Domain Admin account and click on Finish to complete the configuration.

How it works...

The IP configuration is unique for each system environment SCO interfaces with for the tasks in scope of automation. The Active Directory IP configuration grants SCO the rights to perform the actions specified in the Runbook using the activities of the IP. Typical Active Directory activities include, but are not limited to, creating user and computer accounts, moving user and computer accounts into organizational units, and deleting user and computer accounts. In our example we created two connection account configurations for the following reasons:

We follow the guidance of scoping automation to the rights of the manual processes.
If we use the example of a Runbook for creating user accounts, we do not need domain admin access. A service desk user performing the same action manually would typically be granted only account management rights in AD.

We have more flexibility with delegating management of and access to Runbooks. Runbooks with elevated rights through the connection configuration can be separated from Runbooks with lower rights using folder security.

The configuration requires planning and an understanding of its implications before implementing. Each IP has its own unique options which you must specify before you create Runbooks using that IP. The default IPs that you can download from Microsoft include documentation on the properties you must set.

There's more...

As you have seen in this recipe, we need to configure each additional Integration Pack with a connection string, user, and password. The built-in activities from SCO use the service account rights to perform their actions, although you can configure a different user for most of the built-in activities.

See also

The official online documentation for Microsoft Integration Packs is updated regularly and should be a point of reference: https://www.microsoft.com/en-us/download/details.aspx?id=54098. The recipe on creating and maintaining a security model for Orchestrator expands further on the delegation model in SCO.

Summary

In this article, we have covered the following:

Deploying an Additional Runbook Designer
Registering an SCO Integration Pack
Deploying an SCO Integration Pack to Runbook Designer and Server
Initial Integration Pack Configuration

Resources for Article:

Further resources on this subject:
Deploying the Orchestrator Appliance [article]
Unpacking System Center 2012 Orchestrator [article]
Planning a Compliance Program in Microsoft System Center 2012 [article]

Thread of Execution

Packt
10 Jul 2017
6 min read
In this article by Anton Polukhin Alekseevic, the author of the book Boost C++ Application Development Cookbook - Second Edition, we will look at the multithreading concept. Multithreading means multiple threads of execution exist within a single process. Threads may share process resources and have their own resources. Those threads of execution may run independently on different CPUs, leading to faster and more responsive programs. Let's see how to create a thread of execution.

(For more resources related to this topic, see here.)

Creating a thread of execution

On modern multicore processors, to achieve maximal performance (or just to provide a good user experience), programs usually must use multiple threads of execution. Here is a motivating example in which we need to create and fill a big file in a thread that draws the user interface:

#include <algorithm>
#include <fstream>
#include <iterator>

bool is_first_run();

// Function that executes for a long time.
void fill_file(char fill_char, std::size_t size, const char* filename);

// ...
// Somewhere in thread that draws a user interface:
if (is_first_run()) {
    // This will be executing for a long time during which
    // the user interface freezes.
    fill_file(0, 8 * 1024 * 1024, "save_file.txt");
}

Getting ready

This recipe requires knowledge of boost::bind or std::bind.

How to do it...

Starting a thread of execution was never so easy:

#include <boost/thread.hpp>

// ...
// Somewhere in thread that draws a user interface:
if (is_first_run()) {
    boost::thread(boost::bind(
        &fill_file, 0, 8 * 1024 * 1024, "save_file.txt"
    )).detach();
}

How it works...

The boost::thread constructor accepts a functional object that can be called without parameters (we provided one using boost::bind) and creates a separate thread of execution. That functional object is copied into the constructed thread of execution and run there.

We are using version 4 of Boost.Thread in all recipes (BOOST_THREAD_VERSION defined to 4). Important differences between Boost.Thread versions are highlighted.

After that, we call the detach() function, which does the following:

The thread of execution is detached from the boost::thread variable but continues its execution
The boost::thread variable starts to hold a Not-A-Thread state

Note that without a call to detach(), the destructor of boost::thread will notice that it still holds an OS thread and will call std::terminate. std::terminate terminates our program without calling destructors, freeing resources, or performing other cleanup. Default-constructed threads also have a Not-A-Thread state, and they do not create a separate thread of execution.

There's more...

What if we want to make sure that the file was created and written before doing some other job? In that case we need to join the thread in the following way:

// ...
// Somewhere in thread that draws a user interface:
if (is_first_run()) {
    boost::thread t(boost::bind(
        &fill_file, 0, 8 * 1024 * 1024, "save_file.txt"
    ));

    // Do some work.
    // ...

    // Waiting for thread to finish.
    t.join();
}

After the thread is joined, the boost::thread variable holds a Not-A-Thread state and its destructor does not call std::terminate.

Remember that the thread must be joined or detached before its destructor is called. Otherwise, your program will terminate! With BOOST_THREAD_VERSION=2 defined, the destructor of boost::thread calls detach(), which does not lead to std::terminate.
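To make the pitfall concrete, here is a minimal sketch (the bad_example() function is hypothetical, not from the book) of a joinable thread being destroyed:

#include <boost/thread.hpp>

void some_func();

void bad_example() {
    boost::thread t(&some_func);
    // Neither join() nor detach() is called here, so when 't' goes out
    // of scope while still holding an OS thread, its destructor calls
    // std::terminate (with BOOST_THREAD_VERSION=4).
}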
Relying on this version-2 auto-detach behavior, however, breaks compatibility with std::thread, and some day, when your project moves to the C++ standard library threads or when BOOST_THREAD_VERSION=2 is no longer supported, it will give you a lot of surprises. Version 4 of Boost.Thread is more explicit and strict, which is usually preferable in the C++ language.

Beware that std::terminate() is called when any exception that is not of type boost::thread_interrupted leaves the boundary of the functional object that was passed to the boost::thread constructor.

There is a very helpful RAII wrapper around the thread that allows you to emulate the BOOST_THREAD_VERSION=2 behavior; it is called boost::scoped_thread<T>, where T can be one of the following classes:

boost::interrupt_and_join_if_joinable: To interrupt and join the thread at destruction
boost::join_if_joinable: To join the thread at destruction
boost::detach: To detach the thread at destruction

Here is a small example:

#include <boost/thread/scoped_thread.hpp>

void some_func();

void example_with_raii() {
    boost::scoped_thread<boost::join_if_joinable> t(
        boost::thread(&some_func)
    );
    // 't' will be joined at scope exit.
}

The boost::thread class was accepted as a part of the C++11 standard and you can find it in the <thread> header in the std:: namespace. There is no big difference between Boost's version 4 and the C++11 standard library version of the thread class. However, boost::thread is available on C++03 compilers, so its usage is more versatile.

There is a very good reason for calling std::terminate instead of joining by default! C and C++ languages are often used in life-critical software. Such software is controlled by other software, called a watchdog. A watchdog can easily detect that an application has terminated, but cannot always detect deadlocks, or detects them only with bigger delays. For example, for defibrillator software it is safer to terminate than to hang on join() for a few seconds waiting for a watchdog reaction. Keep that in mind while designing such applications.

See also

All the recipes in this chapter use Boost.Thread. You may continue reading to get more information about the library. The official documentation has a full list of the boost::thread methods and remarks about their availability in the C++11 standard library; it can be found at http://boost.org/libs/thread. The Interrupting a thread recipe will give you an idea of what the boost::interrupt_and_join_if_joinable class does.

Summary

We saw how to create a thread of execution using some easy techniques.

Resources for Article:

Further resources on this subject:
Introducing the Boost C++ Libraries [article]
Boost.Asio C++ Network Programming [article]
Application Development in Visual C++ - The Tetris Application [article]

Getting Started with Fortigate: Troubleshooting

Packt
20 Nov 2013
6 min read
Base system diagnostics

The status screen in the web-based manager includes a high-level overview of information such as the system time (which is important, for example, to have coherent error messages and log recording), CPU and memory usage, license information, and alerts, as we can see in the following screenshot:

Although this screen is useful for a rapid assessment of the situation, our diagnostic tools usually have to dig deeper. The first base command we will use in the CLI is get system. This command opens more than eighty information options, dedicated to the different features of the FortiGate units. Among others, we are able to check counters related to performance, such as:

Startup configuration errors, with the get system startup-error-log command.
Firewall traffic statistics, with the get system performance firewall statistics command.
Firewall packet distribution statistics, with the get system performance firewall packet-distribution command.
Information about the most CPU-intensive processes, with the get system performance top command, which shows a screen divided into columns, as we can see in the following screenshot:

Another fundamental command we will use is diagnose hardware, which is used for problem-solving procedures related to certificates, devices, PCI, and system information. The devices menu is opened with diagnose hardware deviceinfo, and includes a disk option to recover information about internal disks (if present) and a nic option to display data from network interfaces. The latter also shows on screen the errors and drops related to network packets, as we can see in the following screenshot:

To have access to real-time information, we will use the diagnose debug command. The diagnose debug report is not a troubleshooting tool, but is used to create a report for Fortinet technical support. We will talk about additional options for the diagnose debug command later, in relation to TCP/IP debugging.

Troubleshooting routing

The tools that we will see in the following paragraphs are required to troubleshoot the addressing and routing features of the TCP/IP protocol. Before we proceed to explain the individual tools and commands for troubleshooting, we can take advantage of a real-world suggestion. In order to perform the troubleshooting steps in a more comfortable way, it is often advisable to use a client for SSH and Telnet such as PuTTY (http://bit.ly/1kyS98) to launch two separate sessions on a FortiGate unit. One of the two consoles will be dedicated to watching the results of the debug commands. The second console will be dedicated to launching commands, such as ping and traceroute, that we will use to trigger actions that will be visible in the first open console. In the following screenshot we have a diagnose sniffer packet port1 icmp command running in the session opened on the left-hand side and an execute ping command in the session opened in the right-hand side window:

Layer 2 and layer 3 TCP/IP diagnostics

Some issues can be solved only by correcting the ARP table that associates IP and MAC addresses. The diagnose ip arp list command shows the ARP cache, as shown in the following screenshot:

The following commands are used to manage the ARP cache:

The execute clear system arp table command, to remove the ARP cache.
The diagnose ip arp delete <interface name> <IP address> command, to remove a single ARP entry.
The diagnose ip arp flush <interface name> command, to remove all entries associated with a single interface.
The config system arp-table command, to add a static ARP entry. Adding an entry requires two further steps: entering the config system arp-table context, and then using the edit command to create a new entry or modify an existing one. Three mandatory parameters are:

set mac, to configure a MAC address for the entry
set ip, to configure an IP address for the entry
set interface, to select the interface that is connected to the MAC and IP

In the following screenshot we can see all the required steps to add entry number 3 to our ARP cache with the following parameters: ip 192.168.12.1, with MAC F0:DE:F1:E4:75:B9, on the internal interface:

We can now take care of layer 3, especially from the point of view of routing. As in any device that manages networking, the most used command (part of the ICMP protocol) is the ping command. A FortiGate unit supports two kinds of ping commands: execute ping <IP address>, and a command dedicated to modifying the behavior of the ping command, execute ping-options, which includes parameters such as:

data-size: To select the datagram size in bytes (between 0 and 65507)
interval: To set a value in seconds between two pings
repeat-count: To select the number of pings to send
source: To specify a source interface (the default value is auto-select)
view-settings: To show the current ping options
timeout: To specify the timeout in seconds

In the following screenshot we have modified some ping parameters and verified them with the view-settings parameter:

Another fundamental command based on ICMP is execute traceroute <dest>, which allows us to see all the hops (networking devices) that a network packet traverses, starting from the FortiGate to a destination (which can be an IP address or an FQDN). Having the full path shown can be important to detect a wrong or faulty hop along the path. The usefulness of traceroute depends on how many devices along the route allow the use of the ICMP protocol, but even if we use it only to troubleshoot our internal corporate network, the results of this simple command are extremely useful. To show the result of a traceroute and have fun along the way, we can use the so-called "Star Wars Traceroute": execute traceroute 216.81.59.173 will show the opening crawl of Star Wars Episode IV (a result that was obtained by making clever use of hostnames and routing). We can see a (small) part of the result in the following screenshot:

The next logical step to debug problems at layer 3 of TCP/IP is to verify the routing table, something that we are able to do with the get router info routing-table all command. The resulting text can be very lengthy, so we are able to filter the output using parameters including:

details: Show routing table details information
rip: Show the RIP routing table
ospf: Show the OSPF routing table
isis: Show the ISIS routing table
static: Show the static routing table
connected: Show the connected routing table
database: Show the routing information base

The routing table shows the routing entries and their origin (the routing protocol that added an entry to the routing table).

Summary

In this article, the authors have covered base system diagnostics, troubleshooting of routing, and layer 2 and layer 3 TCP/IP diagnostics.

Useful Links:
vCloud Networks
Network Virtualization and vSphere
Supporting hypervisors by OpenNebula

Creating Fabric Policies

Packt
06 Apr 2017
9 min read
In this article, Stuart Fordham, the author of the book Cisco ACI Cookbook, helps us understand ACI and the APIC and explains how to create fabric policies.

(For more resources related to this topic, see here.)

Understanding ACI and the APIC

ACI is for the data center. It is a fabric (which is just a fancy name for the layout of the components) that can span data centers using OTV or similar overlay technologies, but it is not for the WAN. We can implement a similar level of programmability on our WAN links through APIC-EM (Application Policy Infrastructure Controller Enterprise Module), which uses ISR or ASR series routers along with the APIC-EM virtual machine to control and program them. APIC and APIC-EM are very similar; just the object of their focus is different. APIC-EM is outside the scope of this book, as we will be looking at data center technologies.

The APIC is our frontend. Through it, we can create and manage our policies, manage the fabric, create tenants, and troubleshoot. Most importantly, the APIC is not associated with the data path. If we lose the APIC for any reason, the fabric will continue to forward traffic.

To give you the technical elevator pitch, ACI uses a number of APIs (application programming interfaces) such as REST (Representational State Transfer) with languages such as JSON (JavaScript Object Notation) and XML (eXtensible Markup Language), as well as the CLI and the GUI, to manage the fabric, and other protocols such as OpFlex to supply the policies to the network devices. The first set (those that manage the fabric) are referred to as northbound protocols. Northbound protocols allow lower-level network components to talk to higher-level ones. OpFlex is a southbound protocol. Southbound protocols (such as OpFlex and OpenFlow, another protocol you will hear about in relation to SDN) allow the controllers to push policies down to the nodes (the switches).

Figure 1

This is a very brief introduction to the how. Now let's look at the why. What does ACI give us that the traditional network does not? In a multi-tenant environment, we have defined goals. The primary purpose is that one tenant remains separate from another. We can achieve this in a number of ways.

We could have each of the tenants in their own DMZ (demilitarized zone), with firewall policies to permit or restrict traffic as required. We could use VLANs to provide a logical separation between tenants. This approach has two drawbacks. It places a greater onus on the firewall to direct traffic, which is fine for northbound traffic (traffic leaving the data center) but is not suitable when the majority of the traffic is east-west bound (traffic between applications within the data center; see figure 2).

Figure 2

We could use switches to provide layer-3 routing and use access lists to control and restrict traffic; these are well designed for that purpose. Also, in using VLANs, we are restricted to a maximum of 4,096 potential tenants (due to the 12-bit VLAN ID).

An alternative would be to use VRFs (virtual routing and forwarding). VRFs are self-contained routing tables, isolated from each other unless we instruct the router or switch to share the routes by exporting and importing route targets (RTs). This approach is much better for traffic isolation, but when we need to use shared services, such as an Internet pipe, VRFs can become much harder to keep secure. One way around this would be to use route leaking.
Instead of having a separate VRF for the Internet, this is kept in the global routing table and then leaked to both tenants. This maintains the security of the tenants, and as we are using VRFs instead of VLANs, we have a service that we can offer to more than 4,096 potential customers. However, we also have a much bigger administrative overhead. For each new tenant, we need more manual configuration, which increases our chances of human error.

ACI allows us to mitigate all of these issues. By default, ACI tenants are completely separated from each other. To get them talking to each other, we need to create contracts, which specify what network resources they can and cannot see. There are no manual steps required to keep them separate from each other, and we can offer Internet access rapidly during the creation of the tenant. We also aren't bound by the 4,096 VLAN limit. Communication is through VXLAN, which raises the ceiling of potential segments (per fabric) to 16 million (by using a 24-bit segment ID).

Figure 3

VXLAN is an overlay mechanism that encapsulates layer-2 frames within layer-4 UDP packets, also known as MAC-in-UDP (figure 3). Through this, we can achieve layer-2 communication across a layer-3 network. Apart from the fact that with VXLAN, tenants can be placed anywhere in the data center, and the number of endpoints far exceeds what the traditional VLAN approach allows, the biggest benefit of VXLAN is that we are no longer bound by the Spanning Tree Protocol. With STP, the redundant paths in the network are blocked (until needed). VXLAN, by contrast, uses layer-3 routing, which enables it to use equal-cost multipathing (ECMP) and link aggregation technologies to make use of all the available links, with recovery (in the event of a link failure) in the region of 125 microseconds. With VXLAN, we have endpoints, referred to as VXLAN Tunnel Endpoints (VTEPs), and these can be physical or virtual switchports. Head-End Replication (HER) is used to forward broadcast, unknown destination address, and multicast traffic, which is referred to (quite amusingly) as BUM traffic.

This 16M limit with VXLAN is more theoretical, however. Truthfully speaking, we have a limit of around 1M entries in terms of MAC addresses, IPv4 addresses, and IPv6 addresses due to the size of the TCAM (ternary content-addressable memory). The TCAM is high-speed memory used to speed up the reading of routing tables and the matching against access control lists. The amount of available TCAM became a worry back in 2014 when the BGP routing table first exceeded 512 thousand routes, which was the maximum number supported by many Internet routers. The likelihood of having 1M entries within the fabric is also pretty rare, but even at 1M entries, ACI remains scalable in that the spine switches let the leaf switches know about only the routes and endpoints they need to know about. If you are lucky enough to be scaling at this kind of magnitude, however, it would be time to invest in more hardware and split the load onto separate fabrics. Still, a data center with thousands of physical hosts is very achievable.

Creating fabric policies

In this recipe, we will create an NTP policy and assign it to our pod. NTP is a good place to start, as having a common and synced time source is critical for third-party authentication (such as LDAP) and for logging. In this recipe, we will use the Quick Start menu to create an NTP policy, in which we will define our NTP servers.
We will then create a pod policy and attach our NTP policy to it. Lastly, we'll create a pod profile, which calls the policy and applies it to our pod (our fabric). We can assign pods to different profiles, and we can share policies between policy groups. So we may have one NTP policy but different SNMP policies for different pods. The ACI fabric is very flexible in this respect.

How to do it...

1. From the Fabric menu, select Fabric Policies.
2. From the Quick Start menu, select Create an NTP Policy. A new window will pop up, and here we'll give our new policy a name and an (optional) description and enable it. We can also define any authentication keys, if the servers use them.
3. Clicking on Next takes us to the next page, where we specify our NTP servers. Click on the plus sign on the right-hand side, and enter the IP address or Fully Qualified Domain Name (FQDN) of the NTP server(s). We can also select a management EPG, which is useful if the NTP servers are outside of our network. Then, click on OK.
4. Click on Finish.

We can now see our custom policy under Pod Policies. At the moment, though, we are not using it: clicking on Show Usage at the bottom of the screen shows that no nodes or policies are using the policy. To use the policy, we must assign it to a pod, as we can see from the Quick Start menu. Clicking on the arrow in the circle will show us a handy video on how to do this.

We need to go into the policy groups under Pod Policies and create a new policy:

1. To create the policy, click on the Actions menu, and select Create Pod Policy Group.
2. Name the new policy PoD-Policy.
3. From here, we can attach our NTP-POLICY to the PoD-Policy. To attach the policy, click on the drop-down next to Date Time Policy, and select NTP-POLICY from the list of options.

We can see our new policy. We have not finished yet, as we still need to create a pod profile and assign the policy to the profile. The process is similar to before: we go to Profiles (under the Pod Policies menu), select Actions, and then Create Pod Profile. We give it a name and associate our policy to it.

How it works...

Once we create a policy, we must associate it with a pod policy group. The pod policy group must then be associated with a pod profile. We can see the results here: our APIC is set to use the new profile, which will be pushed down to the spine and leaf nodes. We can also check the NTP status from the APIC CLI, using the command show ntp:

apic1# show ntp
nodeid  remote         refid   st  t  when  poll  reach  delay  offset  jitter
------  -------------  ------  --  -  ----  ----  -----  -----  ------  ------
1       216.239.35.4   .INIT.  16  u  -     16    0      0.000  0.000   0.000
apic1#

Summary

In this article, we have summarized ACI and the APIC and learned how to create fabric policies.

Resources for Article:

Further resources on this subject:
The Fabric library – the deployment and development task manager [article]
Web Services and Forms [article]
The Alfresco Platform [article]

Deploying a Zabbix proxy

Packt
11 Sep 2015
12 min read
In this article by Andrea Dalle Vacche, author of the book Mastering Zabbix, Second Edition, you will learn the basics of how to deploy a Zabbix proxy for a Zabbix server.

(For more resources related to this topic, see here.)

A Zabbix proxy is compiled together with the main server if you add --enable-proxy to the compilation options. The proxy can use any kind of database backend, just as the server does, but if you don't specify an existing DB, it will automatically create a local SQLite database to store its data. If you intend to rely on SQLite, just remember to add --with-sqlite3 to the options as well.

When it comes to proxies, it's usually advisable to keep things as light and simple as we can; of course, this is valid only if the network design permits us to take this decision. A proxy DB will just contain configuration and measurement data that, under normal circumstances, is almost immediately synchronized with the main server. Dedicating a full-blown database to it is usually overkill, so unless you have very specific requirements, the SQLite option will provide the best balance between performance and ease of management.

If you didn't compile the proxy executable the first time you deployed Zabbix, just run configure again with the options you need for the proxies:

$ ./configure --enable-proxy --enable-static --with-sqlite3 --with-net-snmp --with-libcurl --with-ssh2 --with-openipmi

In order to build the proxy statically, you must have a static version of every external library needed. The configure script doesn't do this kind of check. Compile everything again using the following command:

$ make

Be aware that this will compile the main server as well; just remember not to run make install, nor to copy the new Zabbix server executable over the old one in the destination directory. The only files you need to copy over to the proxy machine are the proxy executable and its configuration file. The $PREFIX variable should resolve to the same path you used in the configuration command (/usr/local by default):

# cp src/zabbix_proxy/zabbix_proxy $PREFIX/sbin/zabbix_proxy
# cp conf/zabbix_proxy.conf $PREFIX/etc/zabbix_proxy.conf

Next, you need to fill out the relevant information in the proxy's configuration file. The default values should be fine in most cases, but you definitely need to make sure that the following options reflect your requirements and network status:

ProxyMode=0

This means that the proxy machine is in active mode. Remember that you need at least as many Zabbix trappers on the main server as the number of proxies you deploy. Set the value to 1 if you need or prefer a proxy in passive mode.

Server=n.n.n.n

This should be the IP address of the main Zabbix server or of the Zabbix node that this proxy should report to.

Hostname=Zabbix proxy

This must be a unique, case-sensitive name that will be used in the main Zabbix server's configuration to refer to the proxy.

LogFile=/tmp/zabbix_proxy.log
LogFileSize=1
DebugLevel=2

If you are using a small, embedded machine, you may not have much disk space to spare.
In that case, you may want to comment out all the options regarding the log file and let syslog send the proxy's log to another server on the Internet:

# DBHost=
# DBSchema=
# DBUser=
# DBPassword=
# DBSocket=
# DBPort=

We now need to create the SQLite database; this can be done with the following commands:

$ mkdir -p /var/lib/sqlite/
$ sqlite3 /var/lib/sqlite/zabbix.db < /usr/share/doc/zabbix-proxy-sqlite3-2.4.4/create/schema.sql

Now, in the DBName parameter, we need to specify the full path to our SQLite database:

DBName=/var/lib/sqlite/zabbix.db

The proxy will automatically populate and use the local SQLite database. Fill out the relevant information if you are using a dedicated, external database:

ProxyOfflineBuffer=1

This is the number of hours that a proxy will keep monitored measurements if communications with the Zabbix server go down. Once the limit has been reached, the proxy will housekeep away the old data. You may want to double or triple it if you know that you have a faulty, unreliable link between the proxy and the server.

CacheSize=8M

This is the size of the configuration cache. Make it bigger if you have a large number of hosts and items to monitor.

Zabbix's runtime proxy commands

There is a set of commands that you can run against the proxy to change runtime parameters. This set of commands is really useful if your proxy is struggling with items, that is, taking too long to deliver them, and you need to keep your Zabbix proxy up and running. You can force the configuration cache to be refreshed from the Zabbix server with the following:

$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R config_cache_reload

This command will invalidate the configuration cache on the proxy side and will force the proxy to ask the Zabbix server for the current configuration. We can also increase or decrease the log level quite easily at runtime with log_level_increase and log_level_decrease:

$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R log_level_increase

This command will increase the log level for the proxy process; the same command also supports a target, which can be a PID, a process type, or a process type plus number. What follow are a few examples.

Increase the log level of the third poller process:

$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R log_level_increase=poller,3

Increase the log level of the process with PID 27425:

$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R log_level_increase=27425

Increase or decrease the log level of the icmp pinger (or any other proxy process) with:

$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R log_level_increase="icmp pinger"
zabbix_proxy [28064]: command sent successfully
$ zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf -R log_level_decrease="icmp pinger"
zabbix_proxy [28070]: command sent successfully

We can quickly see the changes reflected in the log file here:

28049:20150412:021435.841 log level has been increased to 4 (debug)
28049:20150412:021443.129 Got signal [signal:10(SIGUSR1),sender_pid:28034,sender_uid:501,value_int:770(0x00000302)].
28049:20150412:021443.129 log level has been decreased to 3 (warning)

Deploying a Zabbix proxy using RPMs

Deploying a Zabbix proxy using the RPM is a very simple task. Here, there are fewer steps required, as Zabbix itself distributes a prepackaged Zabbix proxy that is ready to use.
What you need to do is simply add the official Zabbix repository with the following command, which must be run as root:

$ rpm -ivh http://repo.zabbix.com/zabbix/2.4/rhel/6/x86_64/zabbix-2.4.4-1.el6.x86_64.rpm

Now, you can quickly list all the available zabbix-proxy packages with the following command, again as root:

$ yum search zabbix-proxy
============== N/S Matched: zabbix-proxy ================
zabbix-proxy.x86_64 : Zabbix Proxy common files
zabbix-proxy-mysql.x86_64 : Zabbix proxy compiled to use MySQL
zabbix-proxy-pgsql.x86_64 : Zabbix proxy compiled to use PostgreSQL
zabbix-proxy-sqlite3.x86_64 : Zabbix proxy compiled to use SQLite3

Here, the command is followed by its output, which lists all the available zabbix-proxy packages; all you have to do is choose between them and install the desired package:

$ yum install zabbix-proxy-sqlite3

You have now installed the Zabbix proxy, which can be started up with the following command:

$ service zabbix-proxy start
Starting Zabbix proxy: [ OK ]

Please also ensure that you enable your Zabbix proxy to start when the server boots, with the $ chkconfig zabbix-proxy on command. That done, if you're using iptables, it is important to add a rule to enable incoming traffic on port 10051 (the standard Zabbix proxy port), or in any case on the port that is specified in the configuration file:

ListenPort=10051

To do that, you simply need to edit the iptables configuration file /etc/sysconfig/iptables and add the following line right at the head of the file:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 10051 -j ACCEPT

Then, you need to restart your local firewall as root using the following command:

$ service iptables restart

The log file is generated at /var/log/zabbix/zabbix_proxy.log:

$ tail -n 40 /var/log/zabbix/zabbix_proxy.log
62521:20150411:003816.801 **** Enabled features ****
62521:20150411:003816.801 SNMP monitoring: YES
62521:20150411:003816.801 IPMI monitoring: YES
62521:20150411:003816.801 WEB monitoring: YES
62521:20150411:003816.801 VMware monitoring: YES
62521:20150411:003816.801 ODBC: YES
62521:20150411:003816.801 SSH2 support: YES
62521:20150411:003816.801 IPv6 support: YES
62521:20150411:003816.801 **************************
62521:20150411:003816.801 using configuration file: /etc/zabbix/zabbix_proxy.conf

As you can quickly spot, the default configuration file is located at /etc/zabbix/zabbix_proxy.conf. The only thing that you need to do is make the proxy known to the server and add monitored objects to it. All these tasks are performed through the Zabbix frontend by just clicking on Admin | Proxies and then Create. This is shown in the following screenshot:

Please take care to use the same proxy name that you've used in the configuration file, which in this case is ZabbixProxy; you can quickly check with:

$ grep Hostname= /etc/zabbix/zabbix_proxy.conf
# Hostname=
Hostname=ZabbixProxy

Note how, in the case of an active proxy, you just need to specify the proxy's name as already set in zabbix_proxy.conf; it will be the proxy's job to contact the main server. On the other hand, a passive proxy will need an IP address or a hostname for the main server to connect to, as shown in the following screenshot:

You don't have to assign hosts to proxies at creation time or only in the proxy's edit screen.
You can also do that from a host configuration screen, as follows:

One of the advantages of proxies is that they don't need much configuration or maintenance; once they are deployed and you have assigned some hosts to one of them, the rest of the monitoring activities are fairly transparent. Just remember to check the number of values per second that every proxy has to guarantee, as expressed by the Required performance column in the proxies' list page.

Values per second (VPS) is the number of measurements per second that a single Zabbix server or proxy has to collect. It's an average value that depends on the number of items and the polling frequency of every item. The higher the value, the more powerful the Zabbix machine must be. Depending on your hardware configuration, you may need to redistribute the hosts among proxies, or add new ones if you notice degraded performance coupled with high VPS.

Considering a different Zabbix proxy database

As of Zabbix 2.4, support for nodes has been discontinued, and the only distributed scenario available is limited to the Zabbix proxy; those proxies now play a truly critical role. Also, with proxies deployed in many different geographic locations, the infrastructure is more subject to network outages. That said, we need to consider which database we want to use for critical remote proxies. SQLite3 is a good product for a standalone, lightweight setup, but if, in our scenario, the proxy we've deployed needs to retain a considerable amount of metrics, we need to consider the fact that SQLite3 has certain weak spots:

The atomic-locking mechanism in SQLite3 is not the most robust
SQLite3 suffers during high-volume writes
SQLite3 does not implement any kind of user authentication mechanism

Apart from the fact that SQLite3 does not implement any authentication mechanism, the database files are created with the standard umask and are therefore readable by everyone. In the event of a crash during high load, it is not the best database to use. Here is an example of the SQLite3 database and how to access it using a third-party account:

$ ls -la /tmp/zabbix_proxy.db
-rw-r--r--. 1 zabbix zabbix 867328 Apr 12 09:52 /tmp/zabbix_proxy.db
# su - adv
[adv@localhost ~]$ sqlite3 /tmp/zabbix_proxy.db
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite>

For all the critical proxies, then, it is advisable to use a different database. Here, we will use MySQL, which is a well-known database.
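As a stopgap, if a less critical proxy does stay on SQLite, the world-readable database file shown above can at least be restricted to the zabbix user. Here is a minimal sketch, assuming the file path from the example, run as root:

# chown zabbix:zabbix /tmp/zabbix_proxy.db
# chmod 600 /tmp/zabbix_proxy.db

This does not address the locking or high-volume write weaknesses, but it does close the casual read access demonstrated with the third-party account.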
To install the Zabbix proxy with MySQL, if you're compiling it from source, you need to use the following command line:

$ ./configure --enable-proxy --enable-static --with-mysql --with-net-snmp --with-libcurl --with-ssh2 --with-openipmi

This should be followed by the usual:

$ make

Instead, if you're using the precompiled RPM, you can simply run, as root:

$ yum install zabbix-proxy-mysql

Now, you need to start up your MySQL database and create the required database for your proxy:

$ mysql -uroot -p<password>
$ create database zabbix_proxy character set utf8 collate utf8_bin;
$ grant all privileges on zabbix_proxy.* to zabbix@localhost identified by '<password>';
$ quit;
$ mysql -uzabbix -p<password> zabbix_proxy < database/mysql/schema.sql

If you've installed using the RPM, the previous command will be:

$ mysql -uzabbix -p<password> zabbix_proxy < /usr/share/doc/zabbix-proxy-mysql-2.4.4/create/schema.sql

Now, we need to configure zabbix_proxy.conf and add the proper values to these parameters:

DBName=zabbix_proxy
DBUser=zabbix
DBPassword=<password>

Please note that there is no need to specify DBHost, as the socket is used for MySQL. Finally, we can start up our Zabbix proxy with the following command, as root:

$ service zabbix-proxy start
Starting Zabbix proxy: [ OK ]

Summary

In this article, you learned how to deploy and start up a Zabbix proxy for a Zabbix server.

Resources for Article:

Further resources on this subject:
Zabbix Configuration [article]
Bar Reports in Zabbix 1.8 [article]
Going beyond Zabbix agents [article]

API and Intent-Driven Networking

Packt
05 Apr 2017
19 min read
In this article by Eric Chou, author of the book Mastering Python Networking, we will look at the following topics:

Treating infrastructure as code and data modeling
Cisco NX-API and Application Centric Infrastructure

(For more resources related to this topic, see here.)

Infrastructure as Python code

In a perfect world, network engineers and people who design and manage networks should focus on what they want the network to achieve instead of the device-level interactions. In my first job as an intern for a local ISP, wide-eyed and excited, I received my first assignment to install a router on a customer site to turn up their fractional frame relay link (remember those?). How would I do that? I asked. I was handed a standard operating procedure for turning up frame relay links. I went to the customer site, blindly typed in the commands, looked at the green lights flash, happily packed my bag, and patted myself on the back for a job well done. As exciting as that first assignment was, I did not fully understand what I was doing. I was simply following instructions without thinking about the implication of the commands I was typing in. How would I troubleshoot something if the light was red instead of green? I think I would have called back to the office.

Of course, network engineering is not about typing commands into a device; it is about building a way that allows services to be delivered from one point to another with as little friction as possible. The commands we have to use and the output we have to interpret are merely a means to an end. I would like to hereby argue that we should focus as much on the intent of the network as possible, in the spirit of Intent-Driven Networking, and abstract ourselves from device-level interactions on an as-needed basis.

It is my opinion that using APIs gets us closer to a state of Intent-Driven Networking. In short, because we abstract away the layer of the specific command executed on the destination device, we focus on our intent instead of the specific command given to the device. For example, if our intent is to deny an IP from entering our network, we might use 'access-list and access-group' on a Cisco and 'filter-list' on a Juniper. However, in using an API, our program can start asking the executor for their intent while masking what kind of physical device it is they are talking to.

Screen scraping versus API structured output

Imagine a common scenario where we need to log in to the device and make sure all the interfaces on the device are in an up/up state (both status and protocol are showing as up). For a human network engineer logging in to a Cisco NX-OS device, it is simple enough to issue the show ip interface brief command and easily tell from the output which interfaces are up:

nx-osv-2# show ip int brief
IP Interface Status for VRF "default"(1)
Interface            IP Address      Interface Status
Lo0                  192.168.0.2     protocol-up/link-up/admin-up
Eth2/1               10.0.0.6        protocol-up/link-up/admin-up
nx-osv-2#

The line breaks, white spaces, and the first line of column titles are easily distinguished by the human eye. In fact, they are there to help us line up, say, the IP addresses of each interface from line 1 to lines 2 and 3. If we were to put ourselves in the computer's position, all these spaces and line breaks only take away from the really important output, which is: which interfaces are in the up/up state?
To illustrate this point, we can look at the Paramiko output again:

>>> new_connection.send('sh ip int brief\n')
16
>>> output = new_connection.recv(5000)
>>> print(output)
b'sh ip int brief\r\r\nIP Interface Status for VRF "default"(1)\r\nInterface IP Address Interface Status\r\nLo0 192.168.0.2 protocol-up/link-up/admin-up \r\nEth2/1 10.0.0.6 protocol-up/link-up/admin-up \r\n\r\nnx-osv-2# '
>>>

If we were to parse out that data, there are of course many ways to do it, but here is what I would do in a pseudocode fashion:

1. Split each line via the line break. I may or may not need the first line, which contains the executed command; for now, I don't think I need it.
2. Take out everything on the second line up until the VRF, and save that in a variable, as we want to know which VRF the output is showing.
3. For the rest of the lines, because we do not know how many interfaces there are, we will use a regular expression to check whether the line starts with a possible interface, such as lo for loopback or Eth.
4. We will then split each such line into three sections via the spaces, consisting of the name of the interface, the IP address, and the interface status.
5. The interface status will then be split further using the forward slash (/) to give us the protocol, link, and admin status.

Whew, that is a lot of work just for something that a human being can tell at a glance! You might be able to optimize the code and the number of lines, but in general this is what we need to do when we need to screen scrape something that is somewhat unstructured. There are many downsides to this method; the bigger problems that I see are:

Scalability: We spend so much time on painstaking details for each output that it is hard to imagine doing this for the hundreds of commands that we typically run.
Predictability: There is really no guarantee that the output stays the same. If the output is changed ever so slightly, it might just invalidate our hard-earned information gathering.
Vendor and software lock-in: Perhaps the biggest problem is that once we have spent all this time parsing the output for this particular vendor and software version, in this case Cisco NX-OS, we need to repeat the process for the next vendor that we pick. I don't know about you, but if I were to evaluate a new vendor, that vendor would be at a severe onboarding disadvantage if I had to rewrite all the screen scraping code again.

Let us compare that with the output of an NX-API call for the same show ip interface brief command. We will go over the specifics of getting this output from the device later in this article, but what is important here is to compare the following output to the previous screen scraping steps:

{
    "ins_api":{
        "outputs":{
            "output":{
                "body":{
                    "TABLE_intf":[
                        {
                            "ROW_intf":{
                                "admin-state":"up",
                                "intf-name":"Lo0",
                                "iod":84,
                                "ip-disabled":"FALSE",
                                "link-state":"up",
                                "prefix":"192.168.0.2",
                                "proto-state":"up"
                            }
                        },
                        {
                            "ROW_intf":{
                                "admin-state":"up",
                                "intf-name":"Eth2/1",
                                "iod":36,
                                "ip-disabled":"FALSE",
                                "link-state":"up",
                                "prefix":"10.0.0.6",
                                "proto-state":"up"
                            }
                        }
                    ],
                    "TABLE_vrf":[
                        {
                            "ROW_vrf":{
                                "vrf-name-out":"default"
                            }
                        },
                        {
                            "ROW_vrf":{
                                "vrf-name-out":"default"
                            }
                        }
                    ]
                },
                "code":"200",
                "input":"show ip int brief",
                "msg":"Success"
            }
        },
        "sid":"eoc",
        "type":"cli_show",
        "version":"1.2"
    }
}

NX-API can return output in XML or JSON; this is obviously the JSON output that we are looking at. Right away you can see the answer is structured and can be mapped directly to a Python dictionary data structure.
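As a quick illustration, here is a minimal Python sketch (with the NX-API response truncated to a single interface for brevity) of picking the interface states out of this JSON:

import json

# 'raw' stands in for the JSON string returned by NX-API, shortened here
raw = '''{"ins_api": {"outputs": {"output": {"body": {"TABLE_intf": [
  {"ROW_intf": {"intf-name": "Lo0", "prefix": "192.168.0.2",
   "proto-state": "up", "link-state": "up", "admin-state": "up"}}]}}}}}'''

body = json.loads(raw)["ins_api"]["outputs"]["output"]["body"]
for row in body["TABLE_intf"]:
    intf = row["ROW_intf"]
    # Each interface row already carries its states as separate keys
    print(intf["intf-name"], intf["proto-state"], intf["link-state"])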
There is no parsing required: you simply pick the key you want and retrieve the value associated with that key. There is also the added benefit of a code indicating command success or failure, with a message telling the sender the reasons behind the success or failure. You no longer need to keep track of the command issued, because it is already returned to you in the input field. There is also other useful metadata, such as the version of the NX-API.

This type of exchange makes life easier for both vendors and operators. On the vendor side, they can easily transfer configuration and state information, as well as add and expose extra fields when the need arises. On the operator side, they can easily ingest the information and build their infrastructure around it. It is generally agreed that automation is much needed and a good thing; the questions usually center around the format and structure the automation should take. As you will see later in this article, there are many competing technologies under the umbrella of API; on the transport side alone, we have REST API, NETCONF, and RESTCONF, among others. Ultimately, the overall market will decide, but in the meantime, we should all take a step back and decide which technology best suits our needs.

Data modeling for infrastructure as code

According to Wikipedia, "A data model is an abstract model that organizes elements of data and standardizes how they relate to one another and to properties of the real world entities. For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner."

The data modeling process can be illustrated in the following graph:

Data Modeling Process (source: https://en.wikipedia.org/wiki/Data_model)

When applied to networking, we can apply this concept as an abstract model that describes our network, be it a datacenter, campus, or global wide area network. If we take a closer look at a physical datacenter, a layer 2 Ethernet switch can be thought of as a device containing a table of MAC addresses mapped to each port. Our switch data model describes how the MAC addresses should be kept in a table: the keys, additional characteristics (think of VLAN and private VLAN), and so on. Similarly, we can move beyond devices and map the datacenter into a model. We can start with the number of devices in each of the access, distribution, and core layers, how they are connected, and how they should behave in a production environment. For example, if we have a Fat-Tree network: how many links should each spine router have, how many routes should they contain, and how many next-hops should each prefix have? These characteristics can be mapped out in a format that can be referenced as the ideal state that we should always check against.

One of the relatively new network data modeling languages that is gaining traction is YANG, which stands for Yet Another Next Generation (despite common belief, some of the IETF workgroups do have a sense of humor). It was first published in RFC 6020 in 2010, and has since gained traction among vendors and operators. At the time of writing, the support for YANG varies greatly from vendor to vendor and platform to platform; the adoption rate in production is therefore relatively low. However, it is a technology worth keeping an eye on.
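Before diving into vendor specifics, here is a toy Python sketch, purely illustrative and not tied to YANG or any vendor schema, of the layer-2 switch model described above:

# A toy data model: a layer-2 switch as a MAC table keyed by
# (vlan, mac) and mapped to ports. Illustrative only.
switch_model = {
    "name": "access-switch-1",
    "mac_table": {
        ("10", "0000.1111.2222"): {"port": "Eth1/1", "entry": "dynamic"},
        ("10", "0000.3333.4444"): {"port": "Eth1/2", "entry": "static"},
    },
}

def lookup_port(model, vlan, mac):
    # Return the port a MAC address was learned on, or None
    entry = model["mac_table"].get((vlan, mac))
    return entry["port"] if entry else None

print(lookup_port(switch_model, "10", "0000.1111.2222"))  # Eth1/1

A real YANG module would define the same structure formally (leaf names, types, and keys), which is exactly what lets an ideal state be checked against a device programmatically.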
Cisco API and ACI

Cisco Systems, the 800-pound gorilla in the networking space, has not missed out on the trend of network automation. The problem has always been the confusion surrounding Cisco's various product lines and levels of technology support. With product lines spanning routers, switches, firewalls, servers (unified computing), wireless, collaboration software and hardware, and analytic software, to name a few, it is hard to know where to start. Since this book focuses on Python and networking, we will scope this section to the main networking products. In particular, we will cover the following:

Nexus product automation with NX-API
Cisco NETCONF and YANG examples
Cisco Application Centric Infrastructure for the datacenter
Cisco Application Centric Infrastructure for the enterprise

For the NX-API and NETCONF examples here, we can either use the Cisco DevNet always-on lab devices or a locally run Cisco VIRL. Since ACI is a separate product, licensed on top of the physical switches, for the following ACI examples I would recommend using the DevNet labs to get an understanding of the tools. Unless, of course, you are one of the lucky ones who have a private ACI lab that you can use.

Cisco NX-API

Nexus is Cisco's product line of datacenter switches. NX-API (http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/programmability/guide/b_Cisco_Nexus_9000_Series_NX-OS_Programmability_Guide/b_Cisco_Nexus_9000_Series_NX-OS_Programmability_Guide_chapter_011.html) allows the engineer to interact with the switch from outside the device via a variety of transports, including SSH, HTTP, and HTTPS.

Installation and preparation

Here are the Ubuntu packages that we will install; you may already have some of them, such as Python development, pip, and Git:

$ sudo apt-get install -y python3-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev zlib1g-dev python3-pip git python3-requests

If you are using Python 2:

$ sudo apt-get install -y python-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev zlib1g-dev python-pip git python-requests

The ncclient (https://github.com/ncclient/ncclient) library is a Python library for NETCONF clients. We will install it from the GitHub repository to get the latest version:

$ git clone https://github.com/ncclient/ncclient
$ cd ncclient/
$ sudo python3 setup.py install
$ sudo python setup.py install

NX-API on Nexus devices is off by default, so we will need to turn it on. We can either use the user already created or create a new user for the NETCONF procedures:

feature nxapi
username cisco password 5 $1$Nk7ZkwH0$fyiRmMMfIheqE3BqvcL0C1 role network-operator
username cisco role network-admin
username cisco passphrase lifetime 99999 warntime 14 gracetime 3

For our lab, we will turn on both HTTP and the sandbox configuration; they should be turned off in production:

nx-osv-2(config)# nxapi http port 80
nx-osv-2(config)# nxapi sandbox

We are now ready to look at our first NX-API example.

NX-API examples

Since we have turned on the sandbox, we can launch a web browser and take a look at the various message formats, requests, and responses based on the CLI commands that we are already familiar with. In the following example, I selected JSON-RPC and the CLI command type for the command show version:
In our first example, we are simply going to connect to the Nexus device and print out the capabilities exchanged when the connection was first made:

#!/usr/bin/env python3
from ncclient import manager

conn = manager.connect(
    host='172.16.1.90',
    port=22,
    username='cisco',
    password='cisco',
    hostkey_verify=False,
    device_params={'name': 'nexus'},
    look_for_keys=False)

for value in conn.server_capabilities:
    print(value)

conn.close_session()

The connection parameters of host, port, username, and password are pretty self-explanatory. The device_params parameter specifies the kind of device the client is connecting to; we will see a different value in the Juniper NETCONF sections. hostkey_verify bypasses the known_hosts check for SSH, while the look_for_keys option disables public-private key authentication in favor of a username and password. The output shows the XML and NETCONF features supported by this version of NX-OS:

$ python3 cisco_nxapi_1.py
urn:ietf:params:xml:ns:netconf:base:1.0
urn:ietf:params:netconf:base:1.0

Using ncclient and NETCONF over SSH is great because it gets us closer to the native implementation and syntax, and we will use the library more later on. For NX-API, however, I personally find it easier to deal with HTTPS and JSON-RPC. In the earlier screenshot of the NX-API Developer Sandbox, you may have noticed that the Request box has a tab labeled Python. If you click on it, you get an automatically generated Python script based on the requests library.

Requests is a very popular, self-proclaimed "HTTP for Humans" library used by companies such as Amazon, Google, and the NSA, among others. You can find more information about it on the official site (http://docs.python-requests.org/en/master/).

For the show version example, the following Python script is automatically generated for you. I am pasting the output here without any modification:

"""
NX-API-BOT
"""
import requests
import json

"""
Modify these please
"""
url='http://YOURIP/ins'
switchuser='USERID'
switchpassword='PASSWORD'

myheaders={'content-type':'application/json-rpc'}
payload=[
  {
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {
      "cmd": "show version",
      "version": 1.2
    },
    "id": 1
  }
]
response = requests.post(url, data=json.dumps(payload), headers=myheaders, auth=(switchuser, switchpassword)).json()

In the cisco_nxapi_2.py file, you will see that I have only modified the URL, username, and password of the preceding script, and parsed the output to include only the software version (a short sketch of that parsing step follows at the end of this section). Here is the output:

$ python3 cisco_nxapi_2.py
7.2(0)D1(1) [build 7.2(0)ZD(0.120)]

The best part about using this method is that the same syntax works for both configuration commands and show commands, as illustrated in the cisco_nxapi_3.py file. For multi-line configuration, you can use the id field to specify the order of operations. In cisco_nxapi_4.py, the following payload is used to change the description of interface Ethernet 2/12 in interface configuration mode:

[
  {
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {
      "cmd": "interface ethernet 2/12",
      "version": 1.2
    },
    "id": 1
  },
  {
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {
      "cmd": "description foo-bar",
      "version": 1.2
    },
    "id": 2
  },
  {
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {
      "cmd": "end",
      "version": 1.2
    },
    "id": 3
  },
  {
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {
      "cmd": "copy run start",
      "version": 1.2
    },
    "id": 4
  }
]
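The parsing added in cisco_nxapi_2.py boils down to one dictionary lookup. The sketch below is my reconstruction rather than the exact contents of the book file, and the 'sys_ver_str' key is an assumption based on typical NX-OS JSON-RPC output; use the sandbox to confirm the exact key for your platform and NX-OS release:

# Continuing from the generated script above: drill into the JSON-RPC
# reply to print only the software version string.
# 'sys_ver_str' is an assumed key; verify it in the NX-API sandbox.
print(response['result']['body']['sys_ver_str'])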
In the next section, we will look at examples of Cisco NETCONF and the YANG model.
Cisco and YANG model

Earlier in the article, we looked at the possibility of expressing the network using the data modeling language YANG. Let us look into it a little bit more. First off, we should know that YANG only defines the type of data sent over the NETCONF protocol; NETCONF exists as a standalone protocol, as we saw in the NX-API section. YANG being relatively new, its support is spotty across vendors and product lines. For example, if we run the same capability exchange script that we saw earlier against a Cisco 1000v running IOS-XE, this is what we would see:

urn:cisco:params:xml:ns:yang:cisco-virtual-service?module=cisco-virtual-service&revision=2015-04-09
http://tail-f.com/ns/mibs/SNMP-NOTIFICATION-MIB/200210140000Z?module=SNMP-NOTIFICATION-MIB&revision=2002-10-14
urn:ietf:params:xml:ns:yang:iana-crypt-hash?module=iana-crypt-hash&revision=2014-04-04&features=crypt-hash-sha-512,crypt-hash-sha-256,crypt-hash-md5
urn:ietf:params:xml:ns:yang:smiv2:TUNNEL-MIB?module=TUNNEL-MIB&revision=2005-05-16
urn:ietf:params:xml:ns:yang:smiv2:CISCO-IP-URPF-MIB?module=CISCO-IP-URPF-MIB&revision=2011-12-29
urn:ietf:params:xml:ns:yang:smiv2:ENTITY-STATE-MIB?module=ENTITY-STATE-MIB&revision=2005-11-22
urn:ietf:params:xml:ns:yang:smiv2:IANAifType-MIB?module=IANAifType-MIB&revision=2006-03-31
<omitted>

Compare that to the output we saw earlier: clearly, IOS-XE understands more YANG models than NX-OS. Industry-wide data modeling for networking is clearly something that would benefit network automation. However, given the uneven support across vendors and products, it is not yet mature enough to be used across your production network, in my opinion. For the book, I have included a script called cisco_yang_1.py that shows how to parse NETCONF XML output with the YANG filter urn:ietf:params:xml:ns:yang:ietf-interfaces as a starting point to see the existing tag overlay; a minimal sketch of that kind of filtered request follows below. You can check the latest vendor support on the YANG GitHub project page (https://github.com/YangModels/yang/tree/master/vendor).
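As a taste of what such a filtered request looks like, here is a minimal sketch of a NETCONF <get> scoped to the ietf-interfaces model using ncclient. It assumes the same lab credentials as before and a device that actually advertises this model (for example, the IOS-XE device above, with NETCONF on its default port 830); it is an illustration, not the contents of cisco_yang_1.py:

#!/usr/bin/env python3
# A minimal sketch of a filtered NETCONF <get> using ncclient.
from ncclient import manager

# Subtree filter scoped to the ietf-interfaces YANG namespace
interface_filter = '''
<interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces"/>
'''

conn = manager.connect(
    host='172.16.1.90', port=830, username='cisco', password='cisco',
    hostkey_verify=False, look_for_keys=False)

# get() returns an RPC reply object; .xml holds the raw XML payload,
# which you can then parse for the tags you are interested in
reply = conn.get(filter=('subtree', interface_filter))
print(reply.xml)
conn.close_session()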
Cisco ACI

Cisco Application Centric Infrastructure (ACI) is meant to provide a centralized approach to all of the network components. In the datacenter context, this means the centralized controller is aware of and manages the spine, leaf, and top-of-rack switches, as well as all the network service functions. This can be done through the GUI, CLI, or API. Some might argue that ACI is Cisco's answer to the broader software-defined networking movement. One of the somewhat confusing points about ACI is the difference between ACI and APIC-EM. In short, ACI focuses on datacenter operations while APIC-EM focuses on enterprise modules. Both offer a centralized view and control of the network components, but each has its own focus and set of tools. For example, it is rare to see a major datacenter deploy customer-facing wireless infrastructure, but the wireless network is a crucial part of enterprises today. Another example would be the different approaches to network security. While security is important in any network, in the datacenter environment a lot of security policy is pushed to the edge node on the server for scalability; in the enterprise, security policy is somewhat shared between the network devices and the servers. Unlike NETCONF RPC, the ACI API follows the REST model and uses the HTTP verbs (GET, POST, PUT, DELETE) to specify the intended operation.
We can look at the cisco_apic_em_1.py file, which is a modified version of the Cisco sample code lab2-1-get-network-device-list.py (https://github.com/CiscoDevNet/apicem-1.3-LL-sample-codes/blob/master/basic-labs/lab2-1-get-network-device-list.py). The abbreviated version, without comments and spaces, is listed here. The first function, getTicket(), issues an HTTPS POST to the controller at the path /api/v1/ticket, with the username and password embedded in the header. It then parses the returned response for a ticket that is valid for a limited time:

def getTicket():
    url = "https://" + controller + "/api/v1/ticket"
    payload = {"username": "username", "password": "password"}
    header = {"content-type": "application/json"}
    response = requests.post(url, data=json.dumps(payload), headers=header, verify=False)
    r_json = response.json()
    ticket = r_json["response"]["serviceTicket"]
    return ticket

The second function then calls another path, /api/v1/network-device, with the newly acquired ticket embedded in the header, and parses the results:

url = "https://" + controller + "/api/v1/network-device"
header = {"content-type": "application/json", "X-Auth-Token": ticket}

The output displays both the raw JSON response as well as a parsed table. A partial output when executed against a DevNet lab controller is shown here:

Network Devices =
{
  "version": "1.0",
  "response": [
    {
      "reachabilityStatus": "Unreachable",
      "id": "8dbd8068-1091-4cde-8cf5-d1b58dc5c9c7",
      "platformId": "WS-C2960C-8PC-L",
      <omitted>
      "lineCardId": null,
      "family": "Wireless Controller",
      "interfaceCount": "12",
      "upTime": "497 days, 2:27:52.95"
    }
  ]
}

8dbd8068-1091-4cde-8cf5-d1b58dc5c9c7 Cisco Catalyst 2960-C Series Switches
cd6d9b24-839b-4d58-adfe-3fdf781e1782 Cisco 3500I Series Unified Access Points
<omitted>
55450140-de19-47b5-ae80-bfd741b23fd9 Cisco 4400 Series Integrated Services Routers
ae19cd21-1b26-4f58-8ccd-d265deabb6c3 Cisco 5500 Series Wireless LAN Controllers

As you can see, we only queried a single controller device, but we were able to get a high-level view of all the network devices that the controller is aware of. The downside is, of course, that the APIC-EM controller only supports Cisco devices at this time.
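To round out the flow, here is a hedged reconstruction of how the second half of the script might tie the two requests together. It assumes the imports and the controller variable from the sample above; the function name getNetworkDevices and the printed fields are my assumptions for illustration, not the exact contents of cisco_apic_em_1.py:

def getNetworkDevices(ticket):
    # Reuse the service ticket from getTicket() as the X-Auth-Token header
    url = "https://" + controller + "/api/v1/network-device"
    header = {"content-type": "application/json", "X-Auth-Token": ticket}
    response = requests.get(url, headers=header, verify=False)
    r_json = response.json()
    # Print the raw JSON first, then a simple id-to-series table
    print("Network Devices = ")
    print(json.dumps(r_json, indent=2))
    for device in r_json["response"]:
        # 'series' is an assumed field name, based on the sample output above
        print(device["id"], device.get("series"))


ticket = getTicket()
getNetworkDevices(ticket)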
Summary

In this article, we looked at various ways to communicate with and manage network devices from Cisco.