
How-To Tutorials - Servers

Virtualizing Hosts and Applications

Packt
20 May 2016
19 min read
In this article by Jay LaCroix, the author of the book Mastering Ubuntu Server, you will learn how to create, run, and manage Docker containers. There have been a great number of advancements in the IT space in the last several decades, and a few technologies have come along that have truly revolutionized the industry. Few would argue that the Internet itself is by far the most revolutionary of them, but another technology that has created a paradigm shift in IT is virtualization. It changed the way we maintain our data centers, allowing us to segregate workloads into many smaller machines run from a single server or hypervisor. Since Ubuntu features the latest advancements of the Linux kernel, virtualization is built right in. After installing just a few packages, we can create virtual machines on our Ubuntu Server installation without the need for a pricey license agreement or support contract.

Creating, running, and managing Docker containers

Docker is a technology that seemed to come from nowhere and took the IT world by storm just a few years ago. The concept of containerization is not new, but Docker took this concept and made it very popular. The idea behind a container is that you can segregate an application you'd like to run from the rest of your system, keeping it sandboxed from the host operating system, while still being able to use the host's CPU and memory resources. Unlike a virtual machine, a container doesn't have a virtual CPU and memory of its own; it shares resources with the host. This means that you will likely be able to run more containers on a server than virtual machines, since the resource utilization is lower. In addition, you can store a container on a server and allow others within your organization to download a copy of it and run it locally. This is very useful for a developer who is working on a new solution and would like others to test or run it. Since the Docker container holds everything the application needs to run, it's very unlikely that a systematic difference between one machine and another will cause the application to behave differently.

The Docker server, also known as the Hub, can be used remotely or locally. Normally, you'd pull down a container from the central Docker Hub instance, which makes various containers available, usually based on a Linux distribution or operating system. When you download one locally, you'll be able to install packages within the container or make changes to its files, just as if it were a virtual machine. When you finish setting up your application within the container, you can upload it back to Docker Hub for others to benefit from, or to your own local Hub instance for your local staff members to use. In some cases, developers even opt to make their software available to others in the form of containers rather than creating distribution-specific packages. Perhaps they find it easier to develop a single container that can be used on every distribution than to build separate packages for individual distributions.

Let's go ahead and get started. To set up your server to run or manage Docker containers, simply install the docker.io package:

    # apt-get install docker.io

Yes, that's all there is to it. Installing Docker has definitely been the easiest thing we've done during this entire article.
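Before moving on, you can confirm that the client installed correctly with a quick version query (a hedged sanity check, not from the original article; the exact output format varies between Docker releases):

    $ docker --version    # prints the installed client version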
Ubuntu includes Docker in its default repositories, so it's only a matter of installing this one package. You'll now have a new service running on your machine, simply titled docker. You can inspect it with the systemctl command, as you would any other:

    # systemctl status docker

Now that Docker is installed and running, let's take it for a test drive. Having Docker installed gives us the docker command, which has various subcommands to perform different functions. Let's try out docker search:

    # docker search ubuntu

What we're doing with this command is searching Docker Hub for available containers based on Ubuntu. You could search for containers based on other distributions, such as Fedora or CentOS, if you wanted. The command will return a list of Docker images available that meet your search criteria.

The search command was run as root. This is required, unless you make your own user account a member of the docker group. I recommend you do that and then log out and log in again; that way, you won't need to use root anymore. From this point on, I won't suggest using root for the remaining Docker examples. It's up to you whether you want to set up your user account with the docker group or continue to run docker commands as root.

To pull down a Docker image for our use, we can use the docker pull command, along with one of the image names we saw in the output of our search command:

    docker pull ubuntu

With this command, we're pulling down the latest Ubuntu container image available on Docker Hub. The image will now be stored locally, and we'll be able to create new containers from it. To create a new container from our downloaded image, this command will do the trick:

    docker run -it ubuntu:latest /bin/bash

Once you run this command, you'll notice that your shell prompt immediately changes. You're now within a shell prompt from your container. From here, you can run commands you would normally run within a real Ubuntu machine, such as installing new packages, changing configuration files, and so on. Go ahead and play around with the container, and then we'll continue with a bit more theory on how it actually works.

There are some potentially confusing aspects of Docker we should get out of the way before we continue with additional examples. The thing most likely to confuse newcomers to Docker is how containers are created and destroyed. When you execute the docker run command against an image you've downloaded, you're actually creating a container. Each time you use the docker run command, you're not resuming the last container, but creating a new one. To see this in action, run a container with the docker run command provided earlier, and then type exit. Run it again, and then type exit again. You'll notice that the prompt is different each time you run the command. After the root@ portion of the bash prompt within the container is a portion of a container ID, and it'll be different each time you execute the docker run command, since you're creating a new container with a new ID each time. To see the number of containers on your server, execute the docker info command. The first line of the output will tell you how many containers you have on your system, which should be the number of times you've run the docker run command.
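To make this concrete, here's what such a session might look like (a sketch only; the container IDs and the final count are illustrative placeholders, not captured output):

    $ docker run -it ubuntu:latest /bin/bash
    root@353c6fe0be4d:/# exit
    $ docker run -it ubuntu:latest /bin/bash
    root@dfb3e1f93e53:/# exit
    $ docker info | head -n 1
    Containers: 2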
To see a list of all of these containers, execute the docker ps -a command:

    docker ps -a

The output will give you the container ID of each container, the image it was created from, the command being run, when the container was created, its status, and any ports you may have forwarded. The output will also display a randomly generated name for each container, and these names are usually quite wacky. As I was going through the process of creating containers while writing this section, the codenames for my containers were tender_cori, serene_mcnulty, and high_goldwasser. This is just one of the many quirks of Docker, and some of these can be quite hilarious.

The important output of the docker ps -a command is the container ID, the command, and the status. The ID allows you to reference a specific container, and the command lets you know what command was run. In our example, we executed /bin/bash when we started our containers. Using the ID, we can resume a container; simply execute the docker start command with the container ID right after. Your command will end up looking similar to the following:

    docker start 353c6fe0be4d

The output will simply return the ID of the container and then drop you back to your shell prompt. That's not the shell prompt of your container, but that of your server. You might be wondering at this point how you get back to the shell prompt for the container. We can use docker attach for that:

    docker attach 353c6fe0be4d

You should now be within a shell prompt inside your container. If you remember from earlier, when you type exit to disconnect from your container, the container stops. If you'd like to exit the container without stopping it, press CTRL + P and then CTRL + Q on your keyboard. You'll return to your main shell prompt, but the container will still be running. You can see this for yourself by checking the status of your containers with the docker ps -a command.

However, while these keyboard shortcuts get you out of the container, it's important to understand what a container is and what it isn't. A container is not a service running in the background, at least not inherently. A container is a collection of namespaces, such as a namespace for its filesystem or users. When you disconnect without a process running within the container, there's no reason for it to run, since its namespace is empty; thus, it stops.

If you'd like to run a container in a way that is similar to a service (it keeps running in the background), you would want to run the container in detached mode. Basically, this is a way of telling your container, "run this process, and don't stop running it until I tell you to." Here's an example of creating a container and running it in detached mode:

    docker run -dit ubuntu /bin/bash

Normally, we use the -it options to create a container. This is what we used a few pages back. The -i option triggers interactive mode, while the -t option gives us a pseudo-TTY. At the end of the command, we tell the container to run the Bash shell. The -d option runs the container in the background.

It may seem relatively useless to have another Bash shell running in the background that isn't actually performing a task, but these are just simple examples to help you get the hang of Docker. A more common use case would be to run a specific application. In fact, you can even run a website from a Docker container by installing and configuring Apache within the container, including a virtual host.
The question then becomes this: how do you access the container's instance of Apache within a web browser? The answer is port redirection, which Docker also supports. Let's give this a try. First, let's create a new container in detached mode, redirecting port 80 within the container to port 8080 on the host:

    docker run -dit -p 8080:80 ubuntu /bin/bash

The command will output a container ID. This ID will be much longer than you're accustomed to seeing, because when we run docker ps -a, it only shows shortened container IDs. You don't need to use the entire container ID when you attach; you can simply use part of it, so long as it's long enough to be distinct from other IDs, like this:

    docker attach dfb3e

Here, I've attached to a container with an ID that begins with dfb3e. I'm now attached to a Bash shell within the container.

Let's install Apache. We've done this before, but to keep it simple, just install the apache2 package within your container; we don't need to worry about configuring the default sample web page or making it look nice. We just want to verify that it works. Apache should now be installed within the container. In my tests, the apache2 daemon wasn't automatically started as it would've been on a real server instance. Since the latest Ubuntu container available on Docker Hub hadn't yet been upgraded to 16.04 at the time of writing (it's currently 14.04), the systemctl command won't work, so we'll need to use the legacy start command for Apache:

    # /etc/init.d/apache2 start

We can similarly check the status, to make sure it's running:

    # /etc/init.d/apache2 status

Apache should be running within the container. Now, press CTRL + P and then CTRL + Q to exit the container, but allow it to keep running in the background. You should be able to visit the sample Apache web page for the container by navigating to localhost:8080 in your web browser. You should see the default "It works!" page that comes with Apache. Congratulations, you're officially running an application within a container!

Before we continue, think for a moment of all the use cases you can use Docker for. It may seem like a very simple concept (and it is), but it allows you to do some very powerful things. I'll give you a personal example. At a previous job, I worked with some embedded Linux software engineers, who each had their preferred Linux distribution to run on their workstation computers. Some preferred Ubuntu, others preferred Debian, and a few even ran Gentoo. For developers, this poses a problem: the build tools are different in each distribution, because they all ship different versions of the development packages. The application they developed was only known to compile in Debian, and newer versions of the GNU Compiler Collection (GCC) posed a problem for it. My solution was to provide each developer a Docker container based on Debian, with all the build tools they needed to perform their job baked in. At this point, it no longer mattered which distribution they ran on their workstations; the container was the same no matter what they were running. I'm sure there are some clever use cases you can come up with.

Anyway, back to our Apache container: it's now running happily in the background, responding to HTTP requests over port 8080 on the host. But what should we do with it at this point? One thing we can do is create our own image from it. Before we do, we should configure Apache to automatically start when the container is started.
We'll do this a bit differently inside the container than we would on an actual Ubuntu server. Attach to the container, and open the /etc/bash.bashrc file in a text editor within the container. Add the following to the very end of the file:

    /etc/init.d/apache2 start

Save the file, and exit your editor. Exit the container with the CTRL + P and CTRL + Q key combination. We can now create a new image of the container with the docker commit command:

    docker commit <Container ID> ubuntu:apache-server

This command will return the ID of our new image. To view all the Docker images available on our machine, we can run the docker images command to have Docker return a list. You should see the original Ubuntu image we downloaded, along with the one we just created. We'll first see a column for the repository the image came from; in our case, it's ubuntu. Next, we can see the tag. Our original Ubuntu image (the one we downloaded with the docker pull command) has a tag of latest. We didn't specify that when we first downloaded it; it just defaulted to latest. In addition, we see an image ID for both, as well as the size.

To create a new container from our new image, we just need to use docker run but specify the tag and name of our new image. Note that we may already have a container listening on port 8080, so this command may fail if that container hasn't been stopped:

    docker run -dit -p 8080:80 ubuntu:apache-server /bin/bash

Speaking of stopping a container, I should probably show you how to do that as well. As you can probably guess, the command is docker stop followed by a container ID. This will send the SIGTERM signal to the container, followed by SIGKILL if it doesn't stop on its own after a delay:

    docker stop <Container ID>

To remove a container, issue the docker rm command followed by a container ID. Normally, this will not remove a running container, but it will if you add the -f option. You can remove more than one container at a time by adding additional container IDs to the command, with a space separating each. Keep in mind that you'll lose any unsaved changes within your container if you haven't committed it to an image yet:

    docker rm <Container ID>

The docker rm command will not remove images. If you want to remove an image, use the docker rmi command followed by an image ID. You can run the docker images command to view the images stored on your server, so you can easily fetch the ID of the image you want to remove. You can also use the repository and tag name, such as ubuntu:apache-server, instead of the image ID. If the image is in use, you can force its removal with the -f option:

    docker rmi <Image ID>

Before we conclude our look into Docker, there's another related concept you'll definitely want to check out: Dockerfiles. A Dockerfile is a neat way of automating the building of Docker images by creating a text file with a set of instructions for their creation. The easiest way to set up a Dockerfile is to create a directory, preferably with a descriptive name for the image you'd like to create (you can name it whatever you wish, though), and inside it create a file named Dockerfile.
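For example, the initial setup might look like this (the directory name is arbitrary and chosen purely for illustration; substitute any editor you prefer for nano):

    $ mkdir apache-server-image
    $ cd apache-server-image
    $ nano Dockerfile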
Following is a sample; copy this text into your Dockerfile and we'll look at how it works:

    FROM ubuntu
    MAINTAINER Jay <jay@somewhere.net>

    # Update the container's packages
    RUN apt-get update; apt-get dist-upgrade -y

    # Install apache2 and vim
    RUN apt-get install -y apache2 vim

    # Make Apache automatically start up
    RUN echo "/etc/init.d/apache2 start" >> /etc/bash.bashrc

Let's go through this Dockerfile line by line to get a better understanding of what it's doing:

    FROM ubuntu

We need an image to base our new image on, so we're using Ubuntu as a base. This will cause Docker to download the ubuntu:latest image from Docker Hub if we don't already have it downloaded.

    MAINTAINER Jay <jay@somewhere.net>

Here, we're setting the maintainer of the image. Basically, we're declaring its author.

    # Update the container's packages

Lines beginning with a hash symbol (#) are ignored, so we are able to create comments within the Dockerfile. This is recommended to give others a good idea of what your Dockerfile does.

    RUN apt-get update; apt-get dist-upgrade -y

With the RUN command, we're telling Docker to run a specific command while the image is being created. In this case, we're updating the image's repository index and performing a full package update to ensure the resulting image is as fresh as can be. The -y option is provided to suppress any requests for confirmation while the command runs.

    RUN apt-get install -y apache2 vim

Next, we're installing both apache2 and vim. The vim package isn't required, but I personally like to make sure all of my servers and containers have it installed. I mainly included it here to show you that you can install multiple packages in one line.

    RUN echo "/etc/init.d/apache2 start" >> /etc/bash.bashrc

Earlier, we copied the startup command for the apache2 daemon into the /etc/bash.bashrc file. We're including that here so that we won't have to do it ourselves when containers are created from the image.

To build the image, we can use the docker build command, executed from within the directory that contains the Dockerfile. What follows is an example of using the docker build command to create an image tagged packt:apache-server (note the trailing period, which tells docker build to use the current directory as its build context):

    docker build -t packt:apache-server .

Once you run this command, you'll see Docker create the image for you, running each of the commands you asked it to. The image will be set up just the way you like. Basically, we just automated the entire creation of the Apache container we used as an example in this section. Once this is complete, we can create a container from our new image:

    docker run -dit -p 8080:80 packt:apache-server /bin/bash

Almost immediately after running the container, the sample Apache site will be available on the host. With a Dockerfile, you'll be able to automate the creation of your Docker images. There's much more you can do with Dockerfiles, though; feel free to peruse Docker's official documentation to learn more.

Summary

In this article, we took a look at virtualization as well as containerization. We began by walking through the installation of KVM as well as all the configuration required to get our virtualization server up and running. We also took a look at Docker, which is a great way of virtualizing individual applications rather than entire servers. We installed Docker on our server, and we walked through managing containers by pulling down an image from Docker Hub, customizing our own images, and creating Dockerfiles to automate the deployment of Docker images.
We also went over many of the popular Docker commands to manage our containers.


New for 2020 in operations and infrastructure engineering

Richard Gall
19 Dec 2019
5 min read
It's an exciting time if you work in operations and software infrastructure. Indeed, you could even say that as the pace of change and innovation increases, your role only becomes more important. Operations and systems engineers, solution architects, everyone - your jobs are all about bringing stability, order and control into what can sometimes feel like chaos. As anyone that's been working in the industry knows, managing change, from a personal perspective, requires a lot of effort. To keep on top of what's happening in the industry - what tools are being released and updated, what approaches are gaining traction - you need to have one eye on the future and the wider industry. To help you with that challenge and get you ready for 2020, we've put together a list of what's new for 2020 - and what you should start learning.

Learn how to make Kubernetes work for you

It goes without saying that Kubernetes was huge in 2019. But there are plenty of murmurs and grumblings that it's too complicated and adds an additional burden for engineering and operations teams. To a certain extent there's some truth in this - and arguably now would be a good time to accept that just because it seems like everyone is using Kubernetes, it doesn't mean it's the right solution for you. Having said that, 2020 will be all about understanding how to make Kubernetes relevant to you. This doesn't mean you should just drop the way you work and start using Kubernetes, but it does mean that spending some time with the platform and getting a better sense of how it could be used in the future is a useful way to spend your learning time in 2020. Explore Packt's extensive range of Kubernetes eBooks and videos on the Packt store.

Learn how to architect

If software has eaten the world, then by the same token perhaps complexity has well and truly eaten software as we know it. Indeed, Kubernetes is arguably just one of the symptoms and causes of this complexity. Another is the growing demand for architects in engineering and IT teams. There are a number of different 'architecture' job roles circulating across the industry, from solutions architect to application architect. While they each have their own subtle differences, and will even vary from company to company, they're all roles that are about organizing and managing different pieces into something that is both stable and value-driving. Cloud has been particularly instrumental in making architect roles more prominent in the industry. As organizations look to resist the pitfalls of lock-in and better manage resources (financial and otherwise), it will be down to architects to balance business and technology concerns carefully. Learn how to architect cloud native applications: read Architecting Cloud Computing Solutions. Get to grips with everything you need to know to be a software architect: pick up Software Architect's Handbook.

Artificial intelligence

It's strange that the hype around AI doesn't seem to have reached the world of ops. Perhaps this is because the area is more resistant to the spin that comes with AI, preferring instead to focus on the technical capabilities of tools and platforms. Whatever the case, it's nevertheless true that AI will play an important part in how we manage and secure infrastructure. From monitoring system health, to automating infrastructure deployments and configuration, and even identifying security threats, artificial intelligence is already an important component for operations engineers and others.
Indeed, artificial intelligence is being embedded inside products and platforms that ops teams are using - this means the need to 'learn' artificial intelligence is somewhat reduced. But it would be wrong to think it's something that can just be managed from a dashboard. In 2020 it will be essential to better understand where and how artificial intelligence can fit into your operations and architectural toolchain. Find artificial intelligence eBooks and videos in Packt's collection of curated data science bundles.

Observability, monitoring, tracing, and logging

One of the challenges of software complexity is understanding exactly what's going on under the hood. Yes, the network might be unreliable, as the saying goes, but what makes things even worse is that we're often not even sure why. This is where observability and the next generation of monitoring, logging and tracing all come into play. Having detailed insights into how applications and infrastructures are performing, how resources are being managed, and what is actually causing problems is vitally important from a team perspective. Without the ability to understand these things, knowledge becomes siloed inside the brains of specific engineers, which puts pressure on teams and makes you vulnerable to failure as points of failure develop at a personnel level. There are, of course, a wide range of tools and products available that can make monitoring and tracing easy (or easier, at least). But understanding which ones are right for your needs still requires some time learning and exploring the options out there. Make sure you do exactly that in 2020. Learn how to monitor distributed systems with Learn Centralized Logging and Monitoring with Kubernetes.

Making serverless a reality

We've talked about serverless a lot this year. But as a concept there's still considerable confusion about what role it should play in modern DevOps processes. Indeed, even the nomenclature is a little confusing. Platforms using their own terminology, such as 'lambdas' and 'functions', only adds to the sense that serverless is something amorphous and hard to pin down. So, in 2020, we need to work out how to make serverless work for us. Just as we need to consider how Kubernetes might be relevant to our needs, we need to consider in what ways serverless represents both a technical and business opportunity. Search Packt's library for the latest serverless eBooks and videos. Explore more technology eBooks and videos on the Packt store.


Creating a Web Page for Displaying Data from SQL Server 2008

Packt
23 Oct 2009
5 min read
This article by Jayaram Krishnaswamy describes how you may connect to SQL Server 2008 and display the retrieved data in a GridView control on a web page. Establishing a direct connection to SQL Server 2008 is not possible in Visual Studio 2008, as you will soon see in this tutorial. One way to get around this, as shown here, is to create an ODBC connection to the SQL Server and then use that ODBC connection to retrieve the data. Visual Studio 2008 Version 9.0.21022.8 RTM, Microsoft Windows XP Professional Media Center Edition, and SQL Server 'Katmai' were used for this tutorial.

Connecting to SQL Server 2008 is Not Natively Supported in the Microsoft Visual Studio 2008 Designer

In the Visual Studio 2008 IDE, right-click on the Data Connections node in the Server Explorer. This will open up the Add Connection window, where the default connection being displayed is MS SQL Server Compact. Click on the Change... button, which opens the Change Data Source window shown in the next figure. Highlight Microsoft SQL Server as shown and click on the OK button. This once again opens the Add Connection window, showing the SQL Server 2008 instance on the machine, Hodentek in this case, as shown in the next figure. The connection is set for Windows Authentication, and should you test the connectivity you would get 'Success' as a reply. However, when you click on the handle for the database name to retrieve a list of databases on this server, you would get a message as shown.

Creating an ODBC DSN

You will be using the ODBC Data Source Administrator on your desktop to create an ODBC DSN. You access the ODBC Data Source Administrator from Start | All Programs | Control Panel | Administrative Tools | Data Sources (ODBC). This opens up the ODBC Data Source Administrator window as shown in the next figure. Click on the System DSN tab and then click on the Add... button. This opens up the Create New Data Source window, where you scroll down to SQL Server Native Client 10.0. Click on the Finish button. This will bring up the Create a New Data Source to SQL Server window. You must provide a name in the Name box. You also provide a description, and click on the drop-down handle for the question, Which SQL Server do you want to connect to?, to reveal a number of accessible servers as shown. Highlight SQL Server 2008. Click on the Next button, which opens a window where you provide the authentication information. This server uses Windows authentication; if your server uses SQL Server authentication, you will have to be ready to provide the login ID and password. You may accept the defaults for the other configurable options. Click on the Next button, which opens a window where you choose the default database to which you want to establish a connection. Click on the Next button, which opens a window where you accept the defaults, and click on the Finish button. This brings up the final screen, the ODBC Data SQL Server Setup, which summarizes the options chosen as shown. By clicking on the Test Data Source... button you can verify the connectivity. When you click on the OK button you will be taken back to the ODBC Data Source Administrator window, where the DSN you created is now added to the list of DSNs on your machine as shown.

Retrieving Data from the Server to a Web Page

You will be creating an ASP.NET website project. As this version of Visual Studio supports projects targeting different framework versions, choose .NET Framework 2.0 as shown.
On the Default.aspx page, drag and drop a GridView control from the Toolbox as shown in the design view. Click on the smart task handle to reveal the tasks you need to complete for this control. Click on the drop-down handle for the Choose Data Source: task as shown in the previous figure. Now click on the <New data source...> item. This opens the Data Source Configuration Wizard window, which displays the various sources from which you may get your data. Click on the Database icon; the OK button now becomes visible. Click on the OK button. The wizard's next task is to guide you to get the connection information, as in the next figure. Click on the New Connection... button. This will take you back to the Add Connection window. Click on the Change... button as shown earlier in the tutorial. In the Change Data Source window, you now highlight the Microsoft ODBC Data Source as shown in the next figure. Click on the OK button. This opens the Add Connection window, where you can now point to the ODBC source you created earlier, using the drop-down handle for Use user or system data source name. You may also test your connection by hitting the Test Connection button. Click on the OK button. This brings the connection information to the wizard's screen as shown in the next figure. Click on the Next button, which opens a window in which you have the option to save your connection information to the configuration node of your web.config file. Make sure you read the information on this page. The default connection name has been changed to Conn2k8 as shown. Click on the Next button. This will bring up the screen where you provide a SQL SELECT statement to retrieve the columns you want. You have three options, and here the Specify a custom SQL statement or stored procedure option is chosen.


Chaos engineering company Gremlin launches Scenarios, making it easier to tackle downtime issues

Richard Gall
26 Sep 2019
2 min read
At the second ChaosConf in San Francisco, Gremlin CEO Kolton Andrus revealed the company's latest step in its war against downtime: 'Scenarios.' Scenarios makes it easy for engineering teams to simulate common issues that lead to downtime. It's a natural and necessary progression for Gremlin, which is seeing even the most forward-thinking teams struggle to figure out how to implement chaos engineering in a way that's meaningful to their specific use case. "Since we released Gremlin Free back in February thousands of customers have signed up to get started with chaos engineering," said Andrus. "But many organisations are still struggling to decide which experiments to run in order to avoid downtime and outages." Scenarios, then, is a useful way into chaos engineering for teams that are reticent about taking their first steps. As Andrus notes, it makes it possible to inject failure "with a couple of clicks."

What failure scenarios does Scenarios let engineering teams simulate?

Scenarios lets Gremlin users simulate common issues that can cause outages. These include:

    Traffic spikes (think Black Friday site failures)
    Network failures
    Region evacuation

This provides a great starting point for anyone who wants to stress test their software. Indeed, it's inevitable that these issues will arise at some point, so taking advance steps to understand what the consequences could be will minimise their impact - and their likelihood.

Why chaos engineering?

Over the last couple of years plenty of people have been attempting to answer the question: why chaos engineering? In truth the reasons are clear: software - indeed, the internet as we know it - is becoming increasingly complex, a mesh of interdependent services and platforms. At the same time, the software being developed today is more critical than ever. For eCommerce sites downtime means money, but for those in the IoT and embedded systems world (like self-driving cars, for example), it's sometimes a matter of life and death. This makes Gremlin's Scenarios an incredibly exciting and important prospect - it should end the speculation and debate about whether we should be doing chaos engineering, and instead help the world simply start doing it. At ChaosConf Andrus said that Gremlin's mission is to build a more reliable internet. We should all hope they can deliver.


Getting Started with Nginx

Packt
20 Jul 2015
10 min read
In this article by Valery Kholodkov, the author of the book Nginx Essentials, we start digging a bit deeper into Nginx by quickly going through the most common distributions that contain prebuilt packages for it.

Installing Nginx

Before you can dive into specific features of Nginx, you need to learn how to install Nginx on your system. It is strongly recommended that you use prebuilt binary packages of Nginx if they are available in your distribution. This ensures the best integration of Nginx with your system and the reuse of best practices incorporated into the package by the package maintainer. Prebuilt binary packages of Nginx automatically maintain dependencies for you, and package maintainers are usually fast to include security patches, so you don't get any complaints from security officers. In addition to that, the package usually provides a distribution-specific startup script, which doesn't come out of the box. Refer to your distribution package directory to find out if you have a prebuilt package for Nginx. Prebuilt Nginx packages can also be found under the download link on the official nginx.org site.

Installing Nginx on Ubuntu

The Ubuntu Linux distribution contains a prebuilt package for Nginx. To install it, simply run the following command:

    $ sudo apt-get install nginx

The preceding command will install all the required files on your system, including the logrotate script and service autorun scripts. The following list describes the Nginx installation layout that will be created after running this command, as well as the purpose of the selected files and folders:

    Nginx configuration files: /etc/nginx
    Main configuration file: /etc/nginx/nginx.conf
    Virtual hosts configuration files (including the default one): /etc/nginx/sites-enabled
    Custom configuration files: /etc/nginx/conf.d
    Log files (both access and error log): /var/log/nginx
    Temporary files: /var/lib/nginx
    Default virtual host files: /usr/share/nginx/html

Default virtual host files will be placed into /usr/share/nginx/html. Please keep in mind that this directory is only for the default virtual host. For deploying your web application, use the folders recommended by the Filesystem Hierarchy Standard (FHS). Now you can start the Nginx service with the following command:

    $ sudo service nginx start

This will start Nginx on your system.

Alternatives

The prebuilt Nginx package on Ubuntu has a number of alternatives. Each of them allows you to fine-tune the Nginx installation for your system.

Installing Nginx on Red Hat Enterprise Linux or CentOS/Scientific Linux

Nginx is not provided out of the box in Red Hat Enterprise Linux or CentOS/Scientific Linux. Instead, we will use the Extra Packages for Enterprise Linux (EPEL) repository. EPEL is a repository that is maintained by Red Hat Enterprise Linux maintainers, but contains packages that are not a part of the main distribution for various reasons. You can read more about EPEL at https://fedoraproject.org/wiki/EPEL. To enable EPEL, you need to download and install the repository configuration package:

    For RHEL or CentOS/SL 7: http://download.fedoraproject.org/pub/epel/7/x86_64/repoview/epel-release.html
    For RHEL/CentOS/SL 6: http://download.fedoraproject.org/pub/epel/6/i386/repoview/epel-release.html

If you have a newer/older RHEL version, please take a look at the "How can I use these extra packages?"
section in the original EPEL wiki at the following link: https://fedoraproject.org/wiki/EPEL

Now that you are ready to install Nginx, use the following command:

    # yum install nginx

The preceding command will install all the required files on your system, including the logrotate script and service autorun scripts. The following list describes the Nginx installation layout that will be created after running this command and the purpose of the selected files and folders:

    Nginx configuration files: /etc/nginx
    Main configuration file: /etc/nginx/nginx.conf
    Virtual hosts configuration files (including the default one): /etc/nginx/conf.d
    Custom configuration files: /etc/nginx/conf.d
    Log files (both access and error log): /var/log/nginx
    Temporary files: /var/lib/nginx
    Default virtual host files: /usr/share/nginx/html

Default virtual host files will be placed into /usr/share/nginx/html. Please keep in mind that this directory is only for the default virtual host. For deploying your web application, use the folders recommended by FHS. By default, the Nginx service will not autostart on system startup, so let's enable it. Refer to the following list for the commands corresponding to your CentOS version:

    Enable Nginx startup at system startup: chkconfig nginx on (CentOS 6); systemctl enable nginx (CentOS 7)
    Manually start Nginx: service nginx start (CentOS 6); systemctl start nginx (CentOS 7)
    Manually stop Nginx: service nginx stop (CentOS 6); systemctl stop nginx (CentOS 7)

Installing Nginx from source files

Traditionally, Nginx is distributed as source code. In order to install Nginx from the source code, you need to download and compile the source files on your system. It is not recommended that you install Nginx from the source code. Do this only if you have a good reason, such as the following scenarios:

    You are a software developer and want to debug or extend Nginx
    You feel confident enough to maintain your own package
    A package from your distribution is not good enough for you
    You want to fine-tune your Nginx binary

In any of these cases, if you are planning to use this way of installing for real use, be prepared to sort out challenges such as dependency maintenance, distribution, and application of security patches. In this section, we will be referring to the configuration script. The configuration script is a shell script similar to one generated by autoconf, which is required to properly configure the Nginx source code before it can be compiled. This configuration script has nothing to do with the Nginx configuration file that we will be discussing later.

Downloading the Nginx source files

The primary source of Nginx for an English-speaking audience is nginx.org. Open https://nginx.org/en/download.html in your browser and choose the most recent stable version of Nginx. Download the chosen archive into a directory of your choice (/usr/local or /usr/src are common directories to use for compiling software):

    $ wget -q http://nginx.org/download/nginx-1.7.9.tar.gz

Extract the files from the downloaded archive and change to the directory corresponding to the chosen version of Nginx:

    $ tar xf nginx-1.7.9.tar.gz
    $ cd nginx-1.7.9

To configure the source code, we need to run the ./configure script included in the archive:

    $ ./configure
    checking for OS
    + Linux 3.13.0-36-generic i686
    checking for C compiler ... found
    + using GNU C compiler
    [...]

This script will produce a lot of output and, if successful, will generate a Makefile for the source files.
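If the default build doesn't suit you, options can be passed to the same script. For instance (these are standard Nginx configure flags, but this particular combination is only an illustration, not a recommendation):

    $ ./configure --prefix=/usr/local/nginx --with-http_ssl_module --without-http_rewrite_module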
Notice that we showed the non-privileged user prompt $ instead of the root # in the previous command lines. You are encouraged to configure and compile software as a regular user and only install as root. This will prevent a lot of problems related to access restriction while working with the source code.

Troubleshooting

The troubleshooting step, although very simple, has a couple of common pitfalls. The basic installation of Nginx requires the presence of the OpenSSL and Perl Compatible Regular Expressions (PCRE) developer packages in order to compile. If these packages are not properly installed, or not installed in locations where the Nginx configuration script is able to locate them, the configuration step might fail. Then you have to choose between disabling the affected Nginx built-in modules (rewrite or SSL), installing the required packages properly, or pointing the Nginx configuration script to the actual location of those packages if they are installed.

Building Nginx

You can build the source files now using the following command:

    $ make

You'll see a lot of output during compilation. If the build is successful, you can install the Nginx files on your system. Before doing that, make sure you escalate your privileges to the superuser so that the installation script can install the necessary files into the system areas and assign the necessary privileges. Once ready, run the make install command:

    # make install

The preceding command will install all the necessary files on your system. The following list shows the locations of the Nginx files that will be created after running this command and their purposes:

    Nginx configuration files: /usr/local/nginx/conf
    Main configuration file: /usr/local/nginx/conf/nginx.conf
    Log files (both access and error log): /usr/local/nginx/logs
    Temporary files: /usr/local/nginx
    Default virtual host files: /usr/local/nginx/html

Unlike installations from prebuilt packages, installation from source files does not harness Nginx folders for custom configuration files or virtual host configuration files. The main configuration file is also very simple in nature. You have to take care of this yourself. Nginx should be ready to use now. To start Nginx, change your working directory to /usr/local/nginx and run the following command:

    # sbin/nginx

This will start Nginx on your system with the default configuration.

Troubleshooting

This stage works flawlessly most of the time. A problem can occur in the following situations:

    You are using a nonstandard system configuration. Try to play with the options in the configuration script in order to overcome the problem.
    You compiled in third-party modules and they are out of date or not maintained. Switch off the third-party modules that break your build, or contact the developer for assistance.

Copying the source code configuration from prebuilt packages

Occasionally you might want to amend the Nginx binary from a prebuilt package with your own changes. In order to do that, you need to reproduce the build tree that was used to compile the Nginx binary for the prebuilt package. But how would you know what version of Nginx and what configuration script options were used at build time? Fortunately, Nginx has a solution for that: just run the existing Nginx binary with the -V command-line option, and Nginx will print the configure-time options.
This is shown in the following:

    $ /usr/sbin/nginx -V
    nginx version: nginx/1.4.6 (Ubuntu)
    built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1)
    TLS SNI support enabled
    configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' ...

Using the output of the preceding command, reproduce the entire build environment, including the Nginx source tree of the corresponding version and the modules that were included in the build. Here, the output of the nginx -V command is trimmed for simplicity. In reality, you will be able to see and copy the entire command line that was passed to the configuration script at build time. You might even want to reproduce the version of the compiler used in order to produce a binary-identical Nginx executable file (we will discuss this later when discussing how to troubleshoot crashes). Once this is done, run the ./configure script of your Nginx source tree with the options from the output of the -V option (with any necessary alterations) and follow the remaining steps of the build procedure. You will get an altered Nginx executable in the objs/ folder of the source tree.

Summary

Here, you learned how to install Nginx from a number of available sources, the structure of an Nginx installation and the purpose of the various files, the elements and structure of the Nginx configuration file, and how to create a minimal working Nginx configuration file. You also learned about some best practices for Nginx configuration.


Deploying your Applications on WebSphere Application Server 7.0 (Part 1)

Packt
30 Sep 2009
10 min read
Inside the Application Server

Before we look at deploying an application, we will quickly run over the internals of WebSphere Application Server (WAS). The anatomy of WebSphere Application Server is quite detailed, so for now we will briefly explain the important parts. The figure below shows the basic architecture model for a WebSphere Application Server JVM. An important thing to remember is that the WebSphere product code base is the same for all operating systems (platforms). The Java applications that are deployed are written once and can be deployed to all versions of a given WebSphere release without any code changes.

JVM

All WebSphere Application Servers are essentially Java Virtual Machines (JVMs). IBM has implemented the J2EE application server model in a way which maximizes the J2EE specification and also provides many enhancements, creating specific features for WAS. J2EE applications are deployed to an application server.

Web container

A common type of business application is a web application. The WAS web container is essentially a Java-based web server contained within an application server's JVM, which serves the web component of an application to the client browser.

Virtual hosts

A virtual host is a configuration element which is required for the web container to receive HTTP requests. As in most web server technologies, a single machine may be required to host multiple applications and appear to the outside world as multiple machines. Resources that are associated with a particular virtual host are designed not to share data with resources belonging to another virtual host, even if the virtual hosts share the same physical machine. Each virtual host is given a logical name and assigned one or more DNS aliases by which it is known. A DNS alias is the TCP/IP host name and port number that are used to request a web resource, for example: <hostname>:9080/<servlet>. By default, two virtual host aliases are created during installation: one for the administration console called admin_host, and another called default_host which is assigned as the default virtual host alias for all application deployments unless overridden during the deployment phase. All web applications must be mapped to a virtual host; otherwise, web browser clients cannot access the application that is being served by the web container.

Environment settings

WebSphere uses Java environment variables to control settings and properties relating to the server environment. WebSphere variables are used to configure product path names, such as the location of a database driver (for example, ORACLE_JDBC_DRIVER_PATH), and environmental values required by internal WebSphere services and/or applications.

Resources

Configuration data is stored in XML files in the underlying configuration repository of the WebSphere Application Server. Resource definitions are a fundamental part of J2EE administration. Application logic can vary depending on the business requirement, and there are several resource types that can be used by an application. Below is a list of some of the most commonly used resource types:

    JDBC (Java Database Connectivity): Used to define providers and data sources
    URL Providers: Used to define endpoints for external services, for example web services
    JMS Providers: Used to define messaging configurations for the Java Message Service, MQ connection factories, queue destinations, and so on
    Mail Providers: Enable applications to send and receive mail, typically using the SMTP protocol

JNDI

The Java Naming and Directory Interface (JNDI) is employed to make applications more portable. JNDI is essentially an API for a directory service which allows Java applications to look up data and objects via a name. JNDI is a lookup service where each resource can be given a unique name. Naming operations, such as lookups and binds, are performed on contexts. All naming operations begin with obtaining an initial context; you can view the initial context as a starting point in the namespace. Applications use JNDI lookups to find a resource using a known naming convention. Administrators can override the resource the application actually connects to without requiring a reconfiguration or code change in the application. This level of abstraction using JNDI is fundamental and required for the proper use of WebSphere by applications.

Application file types

There are three file types we work with in Java applications. Two can be installed via the WebSphere deployment process: one is known as an EAR file and the other is a WAR file. The third is a JAR file (often reusable common code), which is contained in either the WAR or EAR format. These file types are explained below:

    JAR file: A JAR file (Java ARchive) is used for organizing many files into one. The actual internal physical layout is much like a ZIP file. A JAR is generally used to distribute Java classes and associated metadata. In J2EE applications, the JAR file often contains utility code, shared libraries, and EJBs. An EJB is a server-side model that encapsulates the business logic of an application and is one of several Java APIs in the Java Platform, Enterprise Edition, with its own specification. You can visit http://java.sun.com/products/ejb/ for information on EJBs.

    EAR file: An Enterprise ARchive file represents a J2EE application that can be deployed in a WebSphere application server. EAR files are standard Java archive files (JAR) and have the file extension .ear. An EAR file can consist of one or more web modules packaged in WAR files, one or more EJB modules packaged in JAR files, one or more application client modules, additional JAR files required by the application, or any combination of the above. The modules that make up the EAR file are themselves packaged in archive files specific to their types; for example, a web module contains web archive files and an EJB module contains Java archive files. EAR files also contain a deployment descriptor (an XML file called application.xml) that describes the contents of the application and contains instructions for the entire application, such as security settings to be used in the run-time environment.

    WAR file: A WAR file (Web Application aRchive) is essentially a JAR file used to encapsulate a collection of JavaServer Pages (JSP), servlets, Java classes, HTML, and other related files, which may include XML and other file types depending on the web technology used. For information on JSP and servlets, you can visit http://java.sun.com/products/jsp/. Servlets can support dynamic web page content; they provide dynamic server-side processing and can connect to databases. JavaServer Pages (JSP) files can be used to separate HTML code from the business logic in web pages. Essentially they too can generate dynamic pages; however, they employ Java beans (classes) which contain specific detailed server-side logic.
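Since all three of these formats are standard ZIP-based archives, you can inspect the contents of any of them from a command line with the jar tool that ships with the JDK. The file name and entries below are illustrative placeholders, not a real application:

    $ jar tf MyApplication.ear
    META-INF/application.xml
    MyWebModule.war
    MyUtilities.jar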
A WAR file also has its own deployment descriptor called "web.xml" which is used to configure the WAR file and can contain instruction for resource mapping and security. When an EJB module or web module is installed as a standalone application, it is automatically wrapped in an Enterprise Archive (EAR) file by the WebSphere deployment process and is managed on disk by WebSphere as an EAR file structure. So, if a WAR file is deployed, WebSphere will convert it into an EAR file. Deploying an application As WebSphere administrators, we are asked to deploy applications. These applications may be written in-house or delivered by a third-party vendor. Either way, they will most often be provided as an EAR file for deployment into WebSphere. For the purpose of understanding a manual deployment, we are now going to install a default application. The default application can be located in the <was_root>/installableApps folder. The following steps will show how we deploy the EAR file. Open the administration console and navigate to the Applications section and click on New Application as shown below: You now see the option to create one of the following three types of applications: Application Type Description Enterprise Application EAR file on a server configured to hold installable Web Applications, (WAR), Java archives, library files, and other resource files. Business Level Application A business-level application is an administration model similar to a server or cluster. However, it lends itself to the configuration of applications as a single grouping of modules. Asset An asset represents one or more application binary files that are stored in an asset repository such as Java archives, library files, and other resource files. Assets can be shared between applications. Click on New Enterprise Application. As seen in the following screenshot, you will be presented with the option to either browse locally on your machine for the file or remotely on the Application Server's file system. Since the EAR file we wish to install is on the server, we will choose the Remote file system option. It can sometimes be quicker to deploy large applications by first using Secure File Transfer Protocol (SFTP) to move the file to the application server's file system and then using remote, as opposed to transferring via local browse, which will do an HTTP file transfer which takes more resources and can be slower. The following screenshot depicts the path to the new application: Click Browse.... You will see the name of the application server node. If there is more than one profile, select the appropriate instance. You will then be able to navigate through a web-based version of the Linux file system as seen in the following screenshot: Locate the DefaultApplication.ear file. It will be in a folder called installableApps located in the root WebSphere install folder, for example, <was_root>/installableApps as shown in the previous screenshot. Click Next to begin installing the EAR file. On the Preparing for the application installation page, choose the Fast Path option. There are two options to choose. Install option Description Fast Path The deployment wizard will skip advanced settings and only prompt for the absolute minimum settings required for the deployment. Detailed The wizard will allow, at each stage of the installation, for the user to override any of the J2EE properties and configurations available to an EAR file. 
The Choose to generate default bindings and mappings setting allows the user to accept the default settings for resource mappings or override them with specific values. Resource mappings will exist depending on the complexity of the EAR. Bindings are JNDI-to-resource mappings. Each EAR file has pre-configured XML descriptors which specify the JNDI name that the application resource uses to map to a matching resource provided by the application server. An example would be a JDBC data source which the application refers to as jdbc/mydatasource, whereas the actual data source created in the application server might be called jdbc/datasource1. By choosing the Detailed option, the wizard prompts you to decide how you want to map the resource bindings. By choosing the Fast Path option, you allow the application to use its pre-configured default JNDI names. We will select Fast Path, as demonstrated in the following screenshot:

Click on Next. In the next screen, we are given the ability to fill out some specific deployment options. Below is a list of the options presented on this page.
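The individual deployment options are not reproduced here. To make the JNDI indirection described above concrete, the following is a minimal, illustrative Java sketch of how an application typically looks up a data source by its logical JNDI name when running inside a container such as WebSphere; the resource name java:comp/env/jdbc/mydatasource and the class name are assumptions for illustration, not taken from the deployment above.

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceLookup {
    public static Connection openConnection() throws NamingException, SQLException {
        // All naming operations begin with obtaining an initial context
        Context ctx = new InitialContext();

        // The application only knows the logical name; the administrator
        // binds it to the real resource (for example, jdbc/datasource1)
        // at deployment time, without any code change.
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/mydatasource");

        // Obtain a connection from the container-managed pool
        return ds.getConnection();
    }
}

Because the lookup goes through the name rather than the physical resource, repointing the application at a different database is purely an administrative change.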

Building a portable Minecraft server for LAN parties in the park

Andrew Fisher
01 Jun 2015
14 min read
Minecraft is a lot of fun, especially when you play with friends. Minecraft servers are great, but they aren't very portable and rely on a good Internet connection. What if you could take your own portable server with you - say, to the park - and it would fit inside a lunchbox? This post is about doing just that: building a small, portable minecraft server that you can use to host pop-up crafting sessions no matter where you are when the mood strikes.

Where shell instructions are provided in this document, they are presented assuming you have the relevant permissions to execute them. If you run into permission denied errors, then execute using sudo or switch user to elevate your permissions.

Bill of Materials

The following components are needed:

Raspberry Pi 2 Model B (QTY: 1) - An older version will be too laggy; get a new one.
4GB MicroSD card with Raspbian installed on it (QTY: 1) - The faster the class of SD card, the better.
WiPi wireless USB dongle (QTY: 1) - Note that "cheap" USB dongles often won't support host mode, so they can't create a network access point. The "official" ones cost more but are known to work.
USB powerbank (QTY: 1) - Make sure it's designed for charging tablets (that is, 2.1A output), and the higher the capacity the better (5000mAh or better is good).

Prerequisites

I am assuming you've done the following with regards to getting your Raspberry Pi operational:

The latest Raspbian is installed and is up to date - run 'apt-get update && apt-get upgrade' if unsure.
Using raspi-config, you have set the correct timezone for your location and you have expanded the file system to fill the SD card.
You have wireless configured and you can access your network using wpa_supplicant.
You've configured the Pi to be accessible over SSH and you have a client that allows you to do this (eg ssh, putty etc).

Setup

I'll break the setup into a few parts, each focussing on one aspect of what we're trying to do. These are:

Getting the base dependencies you need to install everything on the RPi
Installing and configuring a minecraft server that will run on the Pi
Configuring the wireless access point
Automating everything to happen at boot time

Configure your Raspberry Pi

Before running a minecraft server on the RPi you will need a few additional packages beyond those you have probably installed by default. From a command line, install the following:

sudo apt-get install oracle-java8-jdk git avahi-daemon avahi-utils hostapd dnsmasq screen

Java is required for minecraft and building the minecraft packages
git will allow you to install various source packages
avahi (also known as ZeroConf) will allow other machines to talk to your machine by name rather than IP address (which means you can connect to minecraft.local rather than 192.168.0.1 or similar)
dnsmasq allows you to run a DNS server and assign IP addresses to the machines that connect to your minecraft box
hostapd uses your wifi module to create a wireless access point (so you can play minecraft in a tent with your friends)

Now you have the various components we need, it's time to start building your minecraft server.

Download the script repo

To make this as fast as possible, I've created a repository on GitHub that has all of the config files in it. Download this using the following commands:

mkdir ~/tmp
cd ~/tmp
git clone https://gist.github.com/f61c89733340cd5351a4.git

This will place a folder called 'mc-config' inside your ~/tmp directory. Everything will be referenced from there.

Get a Spigot build

It is possible to run Minecraft using the vanilla Minecraft server; however, it's a little laggy.
Spigot is a fork of CraftBukkit that seems to be a bit more performance oriented and a lot more stable. Using the vanilla Minecraft server I was experiencing lots of lag issues and crashes; with Spigot these disappeared entirely. The challenge with Spigot is that you have to build the server from scratch, as it can't be distributed. This takes a little while on an RPi but is mostly automated. Run the following commands:

mkdir ~/tmp/mc-build
cd ~/tmp/mc-build
wget https://hub.spigotmc.org/jenkins/job/BuildTools/lastSuccessfulBuild/artifact/target/BuildTools.jar
java -jar BuildTools.jar --rev 1.8

If you have a dev environment set up on your computer, you can do this step locally and it will be a lot faster. The key thing is, at the end, to put the spigot-1.8.jar and craftbukkit-1.8.jar files on the RPi in the ~/tmp/mc-build/ directory. You can do this with scp.

Now wait a while. If you're impatient, open up another SSH connection to your server and configure your access point while the build process is happening.

//time passes

After about 45 minutes, you should have your own Spigot build. Time to configure it with the following commands:

cd ~/tmp/mc-config
./configuremc.sh

This will run a helper script which will set up some baseline properties for your server and some plugins to help it be more stable. It will also move the server files to the right spot, configure a minecraft user, and set minecraft to run as a service when you boot up. Once that is complete, you can start your server:

service minecraft-server start

The first time you do this it will take a long time, as it has to build out the whole world, create config files, and do all sorts of setup tasks. After the first run, however, it will usually take only 10 to 20 seconds to start.

Administer your server

We are using a tool called "screen" to run our minecraft server. Screen is a useful utility that allows you to create a shell, execute things within it, and connect and detach from it as you please. This is a really handy utility when, say, you are running something for a long time and you want to detach your SSH session and reconnect to it later - perhaps you have a flaky connection. When the minecraft service starts up, it creates a new screen session, gives it the name "minecraft_server", and runs the Spigot server command. The nice thing with this is that once the Spigot server stops, the screen will close too. Now, if you want to connect to your minecraft server, the way you do it is like this:

sudo screen -r minecraft_server

To leave your server running, hit <CTRL+a> then hit the "d" key. CTRL+a sends an "action" and then "d" sends "detach". You can keep resuming and detaching like this as much as you like. To stop the server, you can do it two ways. The first is to do it manually: connect to the screen session, then type "stop". This is good as it means you can watch the minecraft server come down and ensure there are no errors. Alternatively, just type:

service minecraft-server stop

This actually simulates doing exactly the same thing.

Figure: Spigot server command line

Connect to your server

Once you've got your minecraft server running, attempt to connect to it from your normal computer using multiplayer and then direct connect. The machine address will be minecraft.local (unless you changed it to something else).
Figure: Server selection

Now you have the minecraft server complete, you can simply SSH in, run 'service minecraft-server start', and your server will come up for you and your friends to play on. The next sections will get you portable and automated.

Setting up the WiFi host

The way I'm going to show you to set up the Raspberry Pi is a little different from other tutorials you'll see. The objective is that if the RPi can discover an SSID that it is allowed to join (eg your home network), then it should join that. If the RPi can't discover a network it knows, then it should create its own and allow other machines to join it. This means that when you have your minecraft server sitting in your kitchen, you can update the OS, download new mods, and use it on your existing network. When you take it to the park to have a sunny crafting session with your friends, the server will create its own network that you can all jump onto.

Turn off auto wlan management

By default, the OS will try and do the "right thing" when you plug in a network interface, so when you go to create your access point it will actually try and put it on the wireless network again - not the desired result. To change this, make the following modifications to /etc/default/ifplugd. Change the lines:

INTERFACES="all"
HOTPLUG_INTERFACES="ALL"

to:

INTERFACES="eth0"
HOTPLUG_INTERFACES="eth0"

Configure hostapd

Now, stop hostapd and dnsmasq from running at boot. They should only come up when needed, so the following commands will make them manual:

update-rc.d -f hostapd remove
update-rc.d -f dnsmasq remove

Next, modify the hostapd daemon file to read the hostapd config from a file. Change the /etc/default/hostapd file to have the line:

DAEMON_CONF="/etc/hostapd/hostapd.conf"

Now create the /etc/hostapd/hostapd.conf file using the one from the repo:

cd ~/tmp/mc-config
cp hostapd.conf /etc/hostapd/hostapd.conf

If you look at this file, you can see we've set the SSID of the access point to be "minecraft", the password to be "qwertyuiop", and it has been set to use wlan0. If you want to change any of these things, feel free to do so. Now you'll probably want to kill your wireless device with:

ifdown wlan0

Make sure all your other processes are finished if you're doing this and compiling Spigot at the same time (or make sure you're connected via the wired ethernet as well). Now run hostapd to check for any config errors:

hostapd -d /etc/hostapd/hostapd.conf

If there are no errors, then background the task (ctrl + z then type 'bg 1') and look at ifconfig. You should now have the wlan0 interface up as well as a wlan0.mon interface. If this is all good, then you know your config is working for hostapd. Foreground the task ('fg 1') and stop the hostapd process (ctrl + c).

Configure dnsmasq

Now to get dnsmasq running - this is pretty easy. Download the dnsmasq example and put it in the right place using these commands:

cd ~/tmp/mc-config
mv /etc/dnsmasq.conf /etc/dnsmasq.conf.backup
cp dnsmasq.conf /etc/dnsmasq.conf

dnsmasq is set to listen on wlan0 and allocate IP addresses in the range 192.168.40.5 - 192.168.40.40. The default IP address of the server will be 192.168.40.1. That's pretty much all you really need to get dnsmasq configured.
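The exact hostapd.conf and dnsmasq.conf files come from the gist cloned earlier and are not reproduced here; as a rough guide, minimal files matching the settings described above might look something like the following sketches (the driver, channel, and lease time are illustrative assumptions):

# /etc/hostapd/hostapd.conf (illustrative sketch)
interface=wlan0
driver=nl80211
ssid=minecraft
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=qwertyuiop

# /etc/dnsmasq.conf (illustrative sketch)
interface=wlan0
dhcp-range=192.168.40.5,192.168.40.40,12h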
Testing the config

Now it's time to test all this configuration. It is probably useful to have your RPi connected to a keyboard and monitor, or available over eth0, as this is the most likely point where things may need debugging. The following commands will bring down wlan0, start hostapd, configure your wlan0 interface to a static IP address, and then start up dnsmasq:

ifdown wlan0
service hostapd start
ifconfig wlan0 192.168.40.1
service dnsmasq start

Assuming you had no errors, you can now connect to your wireless access point from your laptop or phone using the SSID "minecraft" and password "qwertyuiop". Once you are connected you should be given an IP address, and you should be able to ping 192.168.40.1 and shell onto minecraft.local. Congratulations - you've now got a Raspberry Pi wireless access point. As an aside, were you to write some iptables rules, you could now route traffic from the wlan0 interface to the eth0 interface and out onto the rest of your wired network - thus turning your RPi into a router.

Running everything at boot

The final setup task is to make the wifi detection happen at boot time. Most flavours of Linux have a boot script called rc.local, which is pretty much the final thing to run before giving you your login prompt at a terminal. Download the rc.local file using the following commands:

mv /etc/rc.local /etc/rc.local.backup
cp rc.local /etc/rc.local
chmod a+x /etc/rc.local

This script checks to see if the RPi came up on the network. If not, it will wait a couple of seconds, then start setting up hostapd and dnsmasq.
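The actual script ships in the repo; a hedged sketch of the detection logic it implements might look like this (the interface checks and timings are illustrative, not the author's exact code):

#!/bin/sh -e
# Illustrative sketch of the boot-time network detection
sleep 5
# If wlan0 did not get an address from a known network,
# bring up our own access point instead.
if ! ifconfig wlan0 | grep -q "inet "; then
    ifdown wlan0 || true
    service hostapd start
    ifconfig wlan0 192.168.40.1
    service dnsmasq start
fi
exit 0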
To test this is all working, modify your /etc/wpa_supplicant/wpa_supplicant.conf file and change the SSID so it's clearly incorrect. For example, if your SSID was "home", change it to "home2". This way, when the RPi boots it won't find it and the access point will be created.

Park Crafting

Now you have your RPi minecraft server, and it can detect networks and make good choices about when to create its own network, the next thing you need to do is make it portable. The new version 2 RPi is more energy efficient, though running both minecraft and a wifi access point is going to use some power. The easiest way to go mobile is to get a high-capacity USB powerbank that is designed for charging tablets. They can be expensive, but a large-capacity one will keep you going for hours. This powerbank is 5000mAh and can deliver 2.1 amps over its USB connection - plenty for several hours of crafting in the park. Setting this up couldn't be easier: plug a USB cable into the powerbank and then into the RPi. When you're done, simply plug the powerbank into your USB charger or computer and charge it up again.

If you want something a little more custom, then a 7.4V LiPo battery with a step-down voltage regulator (such as this one from Pololu: https://www.pololu.com/product/2850) connected to the power and ground pins on the RPi works very well. The challenge here is charging the LiPo again; however, if you have the means to balance-charge LiPos, then this will probably be a very cheap option. If you connect more than 5V to the RPi you WILL destroy it. There are no protection circuits when you use the GPIO pins.

To protect your setup, simply put it in a little plastic container like a lunchbox. This is how mine travels with me. A plastic lunchbox will protect your server from spilled drinks, enthusiastic dogs, and toddlers attracted to the blinking lights. Take your RPi and power supply to the park, along with your laptop and some friends, then create your own little wifi hotspot and play some minecraft together in the sun.

Going further

If you want to take things a little further, here are some ideas:

Build a custom enclosure - maybe something that looks like your favorite minecraft block.
Using WiringPi (C) or RPi.GPIO (Python), attach an LCD screen that shows the status of your server and who is connected to the minecraft world.
Add some custom LEDs to your enclosure, then, using RaspberryJuice and the Python interface to the API, create objects in your game world that can switch the LEDs on and off.
Add a big red button to your enclosure that, when pressed, will "nuke" the game world, removing all blocks and leaving only water. Or go a little easier and simply use the button to kick everyone off the server when it's time to go or your battery is getting low.

About the Author

Andrew Fisher is a creator and destroyer of things that combine mobile web, ubicomp and lots of data. He is a sometime programmer, interaction researcher and CTO at JBA, a data consultancy in Melbourne, Australia. He can be found on Twitter @ajfisher.

Linux 4.19 kernel releases with open arms and AIO-based polling interface; Linus back to managing the Linux kernel

Natasha Mathur
22 Oct 2018
4 min read
It was last month when Linus Torvalds took a break from kernel development. During his break, he assigned Greg Kroah-Hartman as Linux's temporary leader, who went ahead and released Linux 4.19 today at the ongoing Linux Foundation Open Source Summit in Edinburgh, after eight release candidates. The new release includes features such as a new AIO-based polling interface, L1TF vulnerability mitigations, the block I/O latency controller, time-based packet transmission, and the CAKE queuing discipline, among other minor changes.

The Linux 4.19 kernel release announcement is slightly different and longer than usual, as apart from mentioning major changes, it also talks about welcoming newcomers by helping them learn things with ease. "By providing a document in the kernel source tree that shows that all people, developers, and maintainers alike, will be treated with respect and dignity while working together, we help to create a more welcome community to those newcomers, which our very future depends on if we all wish to see this project succeed at its goals", mentions Kroah-Hartman.

Moreover, Kroah-Hartman also welcomed Linus back into the game as he wrote, "And with that, Linus, I'm handing the kernel tree back to you. You can have the joy of dealing with the merge window".

Let's discuss the features in the Linux 4.19 kernel.

AIO-based polling interface

A new polling API based on the asynchronous I/O (AIO) mechanism was posted by Christoph Hellwig earlier this year. AIO enables the submission of I/O operations without waiting for their completion; polling is a natural addition to AIO, since the point of polling is to avoid waiting for operations to complete. Linux 4.19 comes with AIO poll operations that operate in "one-shot" mode: once a poll notification is generated, a new IOCB_CMD_POLL IOCB must be submitted for that file descriptor. To support AIO-based polling, the poll() method in struct file_operations:

int (*poll) (struct file *file, struct poll_table_struct *table);

which supported the polling system calls in previous kernels, is split into two separate file_operations methods, adding these two new entries to that structure:

struct wait_queue_head *(*get_poll_head)(struct file *file, int mask);
int (*poll_mask) (struct file *file, int mask);

L1 terminal fault vulnerability mitigations

The Meltdown CPU vulnerability, first disclosed earlier this year, allowed unprivileged attackers to easily read arbitrary memory in systems. Then the "L1 terminal fault" (L1TF) vulnerability (also going by the name Foreshadow) was disclosed, which brought the threat back, including easy attacks against host memory from inside a guest. Mitigations for L1TF have been merged into the mainline kernel and are available in Linux 4.19; however, they can be expensive for some users.

The block I/O latency controller

Large data centers make use of control groups to balance the use of the available computing resources among competing users. Block I/O bandwidth is one of the most important resources for specific types of workloads; however, the kernel's I/O controller was not a complete solution to the problem. This is where the block I/O latency controller comes into the picture. Linux 4.19 now has a block I/O latency controller. It regulates latency (instead of bandwidth) at a relatively low level in the block layer. When in use, each control group directory contains an io.latency file that sets the parameters for that group. A line is written to that file following this pattern:

major:minor target=target-time

Here, major and minor identify the specific block device of interest, and target-time is the maximum latency that this group should experience (in milliseconds).
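As a rough illustration of how this is used from a shell on a cgroup-v2 system, assuming the io controller is built into your kernel and the unified hierarchy is mounted at /sys/fs/cgroup (the group name, device numbers, and target below are invented for the example):

# Enable the io controller for child groups of the cgroup root
echo "+io" > /sys/fs/cgroup/cgroup.subtree_control

# Create a group and set a 50 ms latency target for device 8:0 (often /dev/sda)
mkdir /sys/fs/cgroup/webapp
echo "8:0 target=50" > /sys/fs/cgroup/webapp/io.latency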
Time-based packet transmission

Time-based packet transmission comes with a new socket option and a new qdisc, which is designed to buffer packets until a configurable time before their transmission deadlines (tx times). Packets intended for timed transmission should be sent with sendmsg(), with a control-message header (of type SCM_TXTIME) that indicates the transmission deadline as a 64-bit nanoseconds value.

CAKE queuing discipline

The "Common Applications Kept Enhanced" (CAKE) queuing discipline in Linux 4.19 sits between the higher-level protocol code and the network interface and decides which packets are dispatched at any given time. It comprises four different components that are designed to make things work on home links. It prevents the overfilling of buffers and improves various aspects of networking performance, such as bufferbloat reduction and queue management.

For more information, check out the official announcement.

Acting as a proxy (HttpProxyModule)

Packt
24 Dec 2013
9 min read
(For more resources related to this topic, see here.)

The HttpProxyModule allows Nginx to act as a proxy and pass requests to another server:

location / {
  proxy_pass http://app.localhost:8000;
}

Note that when using the HttpProxyModule (or even when using FastCGI), the entire client request will be buffered in Nginx before being passed on to the proxied server.

Explaining directives

Some of the important directives of the HttpProxyModule are as follows.

proxy_pass

The proxy_pass directive sets the address of the proxy server and the URI to which the location will be mapped. The address may be given as a hostname or as an address and port, for example:

proxy_pass http://localhost:8000/uri/;

Or, the address may be given as a UNIX socket path:

proxy_pass http://unix:/path/to/backend.socket:/uri/;

The path is given after the word unix, between two colons. You can use the proxy_set_header directive to forward headers from the client request to the proxied server:

proxy_set_header Host $host;

While passing requests, Nginx replaces the location in the URI with the one specified by the proxy_pass directive. If, inside the proxied location, the URI is changed by the rewrite directive, then this configuration will be used to process the request. For example:

location /name/ {
  rewrite /name/([^/]+) /users?name=$1 break;
  proxy_pass http://127.0.0.1;
}

A request URI is passed to the proxy server after normalization as follows:

Double slashes are replaced by a single slash
Any references to the current directory, like "./", are removed
Any references to the previous directory, like "../", are removed

If proxy_pass is specified without a URI (for example, in "http://example.com/request", /request is the URI part), the request URI is passed to the server in the same form as sent by the client:

location /some/path/ {
  proxy_pass http://127.0.0.1;
}

If you need the proxy connection to an upstream server group to use SSL, your proxy_pass rule should use https:// and you will also have to set your SSL port explicitly in the upstream definition. For example:

upstream https-backend {
  server 10.220.129.20:443;
}

server {
  listen 10.220.129.1:443;
  location / {
    proxy_pass https://https-backend;
  }
}

proxy_pass_header

The proxy_pass_header directive allows transferring header lines from the proxied server that would otherwise not be passed in the response. For example:

location / {
  proxy_pass_header X-Accel-Redirect;
}

proxy_connect_timeout

The proxy_connect_timeout directive sets a connection timeout to the upstream server. You can't set this timeout value to be more than 75 seconds. Please remember that this is not the response timeout, but only a connection timeout; it is not the time until the server returns the page, which is configured through the proxy_read_timeout directive. If your upstream server is up but hanging, this directive will not help, as the connection to the server has already been made.
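Putting the timeout directives together, a hedged example of a location block combining them might look like this (the values and backend address are illustrative only):

location / {
  proxy_pass            http://127.0.0.1:8080;
  proxy_connect_timeout 30;  # give up if no connection is established within 30s
  proxy_send_timeout    60;  # timeout between two successive writes to the upstream
  proxy_read_timeout    90;  # timeout between two successive reads from the upstream
}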
proxy_next_upstream

The proxy_next_upstream directive determines in which cases the request will be transmitted to the next server:

error: An error occurred while connecting to the server, sending a request to it, or reading its response
timeout: A timeout occurred during the connection with the server, while transferring the request, or while reading the response from the server
invalid_header: The server returned an empty or incorrect response
http_500: The server responded with code 500
http_502: The server responded with code 502
http_503: The server responded with code 503
http_504: The server responded with code 504
http_404: The server responded with code 404
off: Disables request forwarding

Transferring the request to the next server is only possible if nothing has been transmitted to the client yet. If the transfer of the response was interrupted partway through due to an error or some other reason, the transfer of the request to the next server will not take place.

proxy_redirect

The proxy_redirect directive allows you to manipulate HTTP redirection by replacing text in the response from the upstream server. Specifically, it replaces text in the Location and Refresh headers. The HTTP Location header field is returned in a response from a proxied server for the following reasons:

To indicate that a resource has moved temporarily or permanently
To provide information about the location of a newly created resource; this could be the result of an HTTP PUT

Let us suppose that the proxied server returned the following:

Location: http://localhost:8080/images/new_folder

If you have the proxy_redirect directive set to the following:

proxy_redirect http://localhost:8080/images/ http://xyz/;

the Location text will be rewritten to be similar to the following:

Location: http://xyz/new_folder/

It is possible to use some variables in the redirected address:

proxy_redirect http://localhost:8000/ http://$location:8000;

You can also use regular expressions in this directive:

proxy_redirect ~^(http://[^:]+):\d+(/.+)$ $1$2;

The value off disables all the proxy_redirect directives at its level:

proxy_redirect off;

proxy_set_header

The proxy_set_header directive allows you to redefine and add new HTTP headers to the request sent to the proxied server. You can use a combination of static text and variables as the value of the proxy_set_header directive. By default, the following two headers are redefined:

proxy_set_header Host $proxy_host;
proxy_set_header Connection close;

You can forward the original Host header value to the server as follows:

proxy_set_header Host $http_host;

However, if this header is absent in the client request, nothing will be transferred. It is better to use the variable $host; its value is equal to the Host request header, or to the basic name of the server in case the header is absent from the client request:

proxy_set_header Host $host;

You can transmit the name of the server together with the port of the proxied server:

proxy_set_header Host $host:$proxy_port;

If you set the value to an empty string, the header is not passed to the upstream proxied server. For example, if you want to disable gzip compression on the upstream, you can do the following:

proxy_set_header Accept-Encoding "";

proxy_store

The proxy_store directive sets the path in which upstream files are stored, with paths corresponding to the alias or root directives. The off value disables local file storage. Please note that proxy_store is different from proxy_cache; it is just a method to store proxied files on disk.
It may be used to construct cache-like setups (usually involving error_page-based fallback). The proxy_store directive is off by default. The value can contain a mix of static strings and variables:

proxy_store /data/www$uri;

The modification date of the file will be set to the value of the Last-Modified header in the response. A response is first written to a temporary file in the path specified by proxy_temp_path and then renamed. It is recommended to keep the temporary path and the path where files are stored on the same file system, to make sure it is a simple rename instead of creating two copies of the file. Example:

location /images/ {
  root        /data/www;
  error_page  404 = @fetch;
}

location /fetch {
  internal;
  proxy_pass           http://backend;
  proxy_store          on;
  proxy_store_access   user:rw group:rw all:r;
  proxy_temp_path      /data/temp;
  alias                /data/www;
}

In this example, proxy_store_access defines the access rights of the created file. In the case of a 404 error, the @fetch internal location proxies to a remote server and stores a local copy, using /data/temp as the temporary path.

proxy_cache

The proxy_cache directive either turns off caching when you use the value off, or sets the name of the cache. This name can then be used subsequently in other places as well. Let's look at the following example to enable caching on the Nginx server:

http {
  proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
  proxy_temp_path /var/www/cache/tmp;

  server {
    location / {
      proxy_pass http://example.net;
      proxy_cache my-cache;
      proxy_cache_valid 200 302 60m;
      proxy_cache_valid 404 1m;
    }
  }
}

The previous example creates a named cache called my-cache. It sets up the validity of the cache for response codes 200 and 302 to 60m, and for 404 to 1m. The cached data is stored in the /var/www/cache folder. The levels parameter sets the number of subdirectory levels in the cache; you can define up to three levels. The keys_zone parameter names the shared memory zone and sets its size (8m here), while the inactive parameter controls how long unused items are kept: all inactive items in my-cache will be purged after 600m. The default value for the inactive interval is 10 minutes.

Chapter 5 of the book, Creating Your Own Module, is inspired by the work of Mr. Evan Miller, which can be found at http://www.evanmiller.org/nginx-modules-guide.html.

Summary

In this article we looked at several standard HTTP modules. These modules provide a very rich set of functionality by default. You can disable these modules if you please at configuration time; however, they will be installed by default if you don't. The list of modules and their directives in this chapter is by no means exhaustive. Nginx's online documentation can provide you with more details.

Resources for Article:

Introduction to nginx [Article]
Nginx Web Services: Configuration and Implementation [Article]
Using Nginx as a Reverse Proxy [Article]

Choosing the right flavor of Debian (Simple)

Packt
08 Oct 2013
7 min read
(For more resources related to this topic, see here.)

Getting ready

At any point in time, Debian has three different branches available for use: stable, testing, and unstable. Think of unstable as the cutting edge of free software; it has reasonably modern software packages, and sometimes those packages introduce changes or features that may break the user experience. After an amount of time has passed (usually 10 days, but it depends on the package's upload priority), the new software is considered to be relatively safe to use and is moved to testing. Testing can provide a good balance between modern software and relatively reliable software. Testing goes through several iterations during the course of several years, and eventually it's frozen for a new stable release. This stable release is supported by the Debian Project for a number of years, including feature and security updates.

Chances are you are building something that has an interesting team of people to back it up. In such scenarios, web development teams have chosen to go with testing, or even unstable, in order to get the latest software available. In other cases, conservative teams or groups with less savvy staff have resorted to stable because it's consistent for years. It is up to you to choose among them, but this book will get you started with stable. You can change your Advanced Packaging Tool (APT) configuration later and upgrade to testing or unstable, but the initial installation media that we will use will be stable.

Also, it is important that developers target the production environment as closely as possible. If you use stable for production, using stable for development will save a lot of time debugging mismatches. You should know which versions of programming languages, modules, libraries, frameworks, and databases your application will be targeting, as this will influence the selection of your branch. You can go to packages.debian.org to check the versions available for a specific package across different branches. Choosing testing (outside a freeze period) or unstable will also mean that you'll need an upgrade strategy where you continuously check for new updates (with tools such as cron-apt) and install them if you want to take advantage of new bug fixes and so on.

How to do it…

Debian offers a plethora of installation methods for the operating system. Besides standard CDs and DVDs, Debian also offers reduced-size installation media, bootable USB images, network boot, and other methods. The complexity of installation is a relative factor that is usually of no concern for DevOps, since installation only happens once, while configuration and administration happen continuously. Before you start considering replication methods (such as precooked images, network distribution, configuration management, and software delivery), you and your team can choose from the following installation methods:

If you are installing Debian on a third-party provider (such as a cloud vendor), they will either provide a Debian image for you, or you can prepare your own in virtualization software and upload the disk later.
If you are installing on your own hardware (including virtualized environments), it's advisable to get either the netinst ISO or the full first DVD ISO.
It all depends on whether you are installing several servers over the course of several months (thus making the DVD obsolete as new updates come out) or have a good Internet connection (or proxies and caching facilities, nearby CDNs, and so on) for downloading any additional packages that the netinst disk might not contain. In general, if you are only deploying a handful of servers and have a good Internet connection at hand, I'd suggest you choose the amd64 netinst ISO, which we will use in this book.

There's more…

There are several other points that you need to consider while choosing the right flavor of Debian. One of them is the architecture you're using and targeting for development.

Architectures

There are tens of computer architectures available in the market. ARM, Intel, AMD, SPARC, and Alpha are all different types of architectures. Debian uses the architecture codenames i386 and amd64 for historical reasons. i386 actually means an Intel or Intel-compatible, 32-bit processor (x86), while amd64 means an Intel or Intel-compatible, 64-bit processor (x86_64). The brand of the processor is irrelevant. A few years ago, choosing between the two was tricky, as some binary-only, non-free libraries and software were not always available for 64-bit processors, and architecture mismatches happened. While there were workarounds (such as running 32-bit-only software using special libraries), it was basically a matter of time until popular software such as Flash caught up with 64-bit versions; thus, the concern was mainly about laptops and desktops. Nowadays, if your CPU (and/or your hypervisor) has 64-bit capabilities (most Intel processors do), it's considered good practice to use the amd64 architecture. We will use amd64 in this book. And since Debian 7.0, the multiarch feature has been included, allowing more than one architecture to be installed and be active on the same hardware.

While the market seems to be settling around 64-bit Intel processors, the choice of an architecture is still important because it determines the future availability of software that you can choose from Debian. There might be some software that is not compiled for or not compatible with your specific architecture, although there is also software that is independent of the architecture. DevOps are usually pragmatic when it comes to choosing architectures, so the following two questions aim to help you understand what to expect:

Will you run your web applications on your own hardware? If so, do you already have this hardware or will you procure it? If you need to procure hardware, take a look at the existing server hardware in your datacenter. Factors such as a preferred vendor, hardware standardization, and so on are all important when choosing the right architecture. From the most popular 32- or 64-bit Intel and AMD processors, through the growing ARM ecosystem, to the more venerable but declining SPARC or Itanium, Debian is available for lots of architectures. If you are out in the market for new hardware, your options are most likely based on an Intel- or AMD-compatible, 32- or 64-bit, server-grade processor. Your decisions will be influenced by factors such as I/O capacity (throughput and speed), memory, disk, and so on, and the architecture will most likely be covered by Debian.

Will you run your web applications on third-party hardware, such as a Virtual Private Server (VPS) provider or a cloud Infrastructure as a Service (IaaS) provider? Most providers will provide you with prebuilt images for Debian.
They are usually 32- or 64-bit x86 images that have some sort of community support; be aware that they might have no vendor support, or that the provider may in some cases waive warranties or other guarantees such as the SLA. You should be able to prepare your own Debian installation using virtualization software (such as KVM, VirtualBox, or Hyper-V) and then upload the virtual disk (VHD, VDI, and so on) to your provider.

Summary

In this article, we learned about selecting the right flavor of Debian for our system. We also learned about the different architectures available in the market that we can use for Debian.

Resources for Article:

Further resources on this subject:
Installation of OpenSIPS 1.6 [Article]
Installing and customizing Redmine [Article]
Installing and Using Openfire [Article]

Configuring Apache and Nginx

Packt
19 Jul 2010
8 min read
(For more resources on Nginx, see here.)

There are basically two main parts involved in the configuration: one relating to Apache and one relating to Nginx. Note that while we have chosen to describe the process for Apache in particular, this method can be applied to any other HTTP server. The only point that differs is the exact configuration sections and directives that you will have to edit. Otherwise, the principle of reverse proxying can be applied, regardless of the server software you are using.

Reconfiguring Apache

There are two main aspects of your Apache configuration that will need to be edited in order to allow both Apache and Nginx to work together at the same time. But let us first clarify where we are coming from, and what we are going towards.

Configuration overview

At this point, you probably have the following architecture set up on your server:

A web server application running on port 80, such as Apache
A dynamic server-side script processing application such as PHP, communicating with your web server via CGI, FastCGI, or as a server module

The new configuration that we are going towards will resemble the following:

Nginx running on port 80
Apache or another web server running on a different port, accepting requests coming from local sockets only
The script processing application configuration will remain unchanged

As you can tell, only two main configuration changes will be applied to Apache, as well as to any other web server that you are running. Firstly, change the port number in order to avoid conflicts with Nginx, which will then be running as the frontend server. Secondly (although this is optional), you may want to disallow requests coming from the outside and only allow requests forwarded by Nginx. Both configuration steps are detailed in the next sections.

Resetting the port number

Depending on how your web server was set up (manual build, or automatic configuration from server panel managers such as cPanel, Plesk, and so on), you may find yourself with a lot of configuration files to edit. The main configuration file is often found in /etc/httpd/conf/ or /etc/apache2/, and there might be more depending on how your configuration is structured. Some server panel managers create extra configuration files for each virtual host.

There are three main elements you need to replace in your Apache configuration:

The Listen directive is set to listen on port 80 by default. You will have to replace that port with another, such as 8080. This directive is usually found in the main configuration file.
You must make sure that the following configuration directive is present in the main configuration file: NameVirtualHost A.B.C.D:8080, where A.B.C.D is the IP address of the main network interface through which server communications go.
The port you just selected needs to be reported in all your virtual host configuration sections, as described below.

The virtual host sections must be transformed from the following template:

<VirtualHost A.B.C.D:80>
  ServerName example.com
  ServerAlias www.example.com
  [...]
</VirtualHost>

to the following:

<VirtualHost A.B.C.D:8080>
  ServerName example.com:8080
  ServerAlias www.example.com
  [...]
</VirtualHost>

In this example, A.B.C.D is the IP address of the virtual host and example.com is the virtual host's name. The port must be edited on the first two lines.

Accepting local requests only

There are many ways you can restrict Apache to accept only local requests, denying access to the outside world. But first, why would you want to do that?
As an extra layer positioned between the client and Apache, Nginx provides a certain comfort in terms of security. Visitors no longer have direct access to Apache, which decreases the potential risk regarding any security issues the web server may have. Globally, it's not necessarily a bad idea to only allow access to your frontend server.

The first method consists of changing the listening network interface in the main configuration file. The Listen directive of Apache lets you specify a port, but also an IP address, although, by default, no IP address is selected, resulting in communications coming from all interfaces. All you have to do is replace the Listen 8080 directive with Listen 127.0.0.1:8080; Apache should then only listen on the local IP address. If you do not host Apache on the same server, you will need to specify the IP address of the network interface that can communicate with the server hosting Nginx.

The second alternative is to establish per-virtual-host restrictions:

<VirtualHost A.B.C.D:8080>
  ServerName example.com:8080
  ServerAlias www.example.com
  [...]
  Order deny,allow
  Deny from all
  Allow from 127.0.0.1
  Allow from 192.168.0.1
</VirtualHost>

Using the Allow and Deny Apache directives, you are able to restrict the IP addresses allowed to access your virtual hosts. This allows for a finer configuration, which can be useful in case some of your websites cannot be fully served by Nginx. Once all your changes are done, don't forget to reload the server to make sure the new configuration is applied, for example with service httpd reload or /etc/init.d/httpd reload.

Configuring Nginx

There are only a couple of simple steps to establish a working configuration of Nginx, although it can be tweaked more accurately, as seen in the next section.

Enabling proxy options

The first step is to enable proxying of requests from your location blocks. Since the proxy_pass directive cannot be placed at the http or server level, you need to include it in every single place that you want to be forwarded. Usually, a location / { ... } fallback block suffices, since it encompasses all requests except those that match location blocks containing a break statement. Here is a simple example using a single backend hosted on the same server:

server {
  server_name .example.com;
  root /home/example.com/www;
  [...]
  location / {
    proxy_pass http://127.0.0.1:8080;
  }
}

In the following example, we make use of an upstream block, allowing us to specify multiple servers:

upstream apache {
  server 192.168.0.1:80;
  server 192.168.0.2:80;
  server 192.168.0.3:80 weight=2;
  server 192.168.0.4:80 backup;
}

server {
  server_name .example.com;
  root /home/example.com/www;
  [...]
  location / {
    proxy_pass http://apache;
  }
}

So far, with such a configuration, all requests are proxied to the backend server; we are now going to separate the content into two categories:

Dynamic files: Files that require processing before being sent to the client, such as PHP, Perl, and Ruby scripts, will be served by Apache
Static files: All other content that does not require additional processing, such as images, CSS files, static HTML files, and media, will be served directly by Nginx

We thus have to separate the content somehow, to be provided by either server.

Separating content

In order to establish this separation, we can simply use two different location blocks: one that will match the dynamic file extensions and another one encompassing all the other files.
This example passes requests for .php files to the proxy:

server {
  server_name .example.com;
  root /home/example.com/www;
  [...]
  location ~* \.php.*$ {
    # Proxy all requests with a URI ending with .php*
    # (includes PHP, PHP3, PHP4, PHP5...)
    proxy_pass http://127.0.0.1:8080;
  }
  location / {
    # Your other options here for static content
    # for example cache control, alias...
    expires 30d;
  }
}

This method, although simple, will cause trouble with websites using URL rewriting. Most Web 2.0 websites now use links that hide file extensions, such as http://example.com/articles/us-economy-strengthens/; some even replace file extensions with links resembling the following: http://example.com/us-economy-strengthens.html. When building a reverse-proxy configuration, you have two options:

Port your Apache rewrite rules to Nginx (usually found in the .htaccess file at the root of the website), in order for Nginx to know the actual file extension of the request and proxy it to Apache correctly.
If you do not wish to port your Apache rewrite rules, the default behavior shown by Nginx is to return 404 errors for such requests. However, you can alter this behavior in multiple ways, for example, by handling 404 requests with the error_page directive or by testing the existence of files before serving them. Both solutions are detailed below.

Here is an implementation of this mechanism, using the error_page directive:

server {
  server_name .example.com;
  root /home/example.com/www;
  [...]
  location / {
    # Your static files are served here
    expires 30d;
    [...]
    # For 404 errors, submit the query to the @proxy
    # named location block
    error_page 404 @proxy;
  }
  location @proxy {
    proxy_pass http://127.0.0.1:8080;
  }
}

Alternatively, you can make use of the if directive from the Rewrite module:

server {
  server_name .example.com;
  root /home/example.com/www;
  [...]
  location / {
    # If the requested file extension ends with .php,
    # forward the query to Apache
    if ($request_filename ~* \.php.*$) {
      break; # prevents further rewrites
      proxy_pass http://127.0.0.1:8080;
    }
    # If the requested file does not exist,
    # forward the query to Apache
    if (!-f $request_filename) {
      break; # prevents further rewrites
      proxy_pass http://127.0.0.1:8080;
    }
    # Your static files are served here
    expires 30d;
  }
}

There is no real performance difference between the two solutions, as they will transfer the same amount of requests to the backend server. You should work on porting your Apache rewrite rules to Nginx if you are looking for optimal performance.
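To illustrate what porting a rewrite rule involves, here is a hedged example; the rule itself is invented for illustration and is not taken from the article. An Apache .htaccess rule such as:

RewriteRule ^articles/([a-z-]+)/$ /article.php?slug=$1 [L]

might be expressed in the Nginx configuration as:

location ~ ^/articles/([a-z-]+)/$ {
  # Reveal the real .php target, then hand the request to Apache
  rewrite ^/articles/([a-z-]+)/$ /article.php?slug=$1 break;
  proxy_pass http://127.0.0.1:8080;
}

With the rewrite ported, Nginx can tell that the request really targets a .php file and proxy it to Apache accordingly, instead of returning a 404 error.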

Using Nginx as a Reverse Proxy

Packt
23 May 2011
7 min read
(For more resources on Nginx, see here.)

Introduction

Nginx has found most applications acting as a reverse proxy for many sites. A reverse proxy is a type of proxy server that retrieves resources for a client from one or more servers. These resources are returned to the client as though they originated from the proxy server itself. Due to its event-driven architecture and C codebase, it consumes significantly less CPU power and memory than many other better-known solutions out there. This article will deal with the usage of Nginx as a reverse proxy in various common scenarios. We will have a look at how we can set up a Rails application, set up load balancing, and also look at a caching setup using Nginx, which can potentially enhance the performance of your existing site without any codebase changes.

Using Nginx as a simple reverse proxy

Nginx in its simplest form can be used as a reverse proxy for any site; it acts as an intermediary layer for security, load distribution, caching, and compression purposes. In effect, it can potentially enhance the overall quality of the site for the end user without any change of application source code, by distributing the load from incoming requests to multiple backend servers and by caching static, as well as dynamic, content.

How to do it...

You will need to first define proxy.conf, which will be later included in the main configuration of the reverse proxy that we are setting up:

proxy_redirect          off;
proxy_set_header        Host $host;
proxy_set_header        X-Real-IP $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size    10m;
client_body_buffer_size 128k;
proxy_connect_timeout   90;
proxy_send_timeout      90;
proxy_read_timeout      90;
proxy_buffers           32 4k;

To use Nginx as a reverse proxy for a site running on a local port of the server, the following configuration will suffice:

server {
  listen 80;
  server_name example1.com;
  access_log /var/www/example1.com/log/nginx.access.log;
  error_log /var/www/example1.com/log/nginx_error.log debug;
  location / {
    include proxy.conf;
    proxy_pass http://127.0.0.1:8080;
  }
}

How it works...

In this recipe, Nginx simply acts as a proxy for the defined backend server, which is running on port 8080 of the server and can be any HTTP web application. Later in this article, other advanced recipes will have a look at how one can define more backend servers, and how we can set them up to respond to requests.

Setting up a Rails site using Nginx as a reverse proxy

In this recipe, we will set up a working Rails site and set up Nginx working on top of the application. This will assume that the reader has some knowledge of Rails and thin. There are other ways of running Nginx and Rails as well, such as using Phusion Passenger.

How to do it...

This will require you to set up thin first, then to configure thin for your application, and then to configure Nginx. If you already have gems installed, then the following command will install thin; otherwise, you will need to install it from source:

sudo gem install thin

Now you need to generate the thin configuration. This will create a configuration in the /etc/thin directory:

sudo thin config -C /etc/thin/myapp.yml -c /var/rails/myapp --servers 5 -e production

Now you can start the thin service. Depending on your operating system, the start-up command will vary.
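For example, on many Linux systems one of the following might work; treat these as illustrative, since the exact wrapper depends on your distribution and thin version:

# Start every server defined by the config files in /etc/thin
sudo thin start --all /etc/thin

# Or, if thin's init script has been installed (via 'sudo thin install'):
sudo /etc/init.d/thin start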
Assuming that you have Nginx installed, you will need to add the following to the configuration file:

upstream thin_cluster {
  server unix:/tmp/thin.0.sock;
  server unix:/tmp/thin.1.sock;
  server unix:/tmp/thin.2.sock;
  server unix:/tmp/thin.3.sock;
  server unix:/tmp/thin.4.sock;
}

server {
  listen 80;
  server_name www.example1.com;
  root /var/www.example1.com/public;
  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    try_files $uri $uri/index.html $uri.html @thin;
  }
  location @thin {
    include proxy.conf;
    proxy_pass http://thin_cluster;
  }
  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root html;
  }
}

How it works...

This is a fairly simple Rails stack, where we basically configure and run five upstream thin threads which interact with Nginx through socket connections. There are a few rewrites that ensure that Nginx serves the static files, and all dynamic requests are processed by the Rails backend. It can also be seen how we set the proxy headers correctly to ensure that the client IP is forwarded correctly to the Rails application. It is important for a lot of applications to be able to access the client IP to show geo-located information, and logging this IP can be useful in identifying whether geography is a problem when the site is not working properly for specific clients.

Setting up correct reverse proxy timeouts

In this section we will set up correct reverse proxy timeouts, which will affect your user's interaction when your backend application is unable to respond to the client's request. In such a case, it is advisable to set up some sensible timeout pages so that the user can understand that further refreshing may only aggravate the issues on the web application.

How to do it...

You will first need to set up proxy.conf, which will later be included in the configuration:

proxy_redirect          off;
proxy_set_header        Host $host;
proxy_set_header        X-Real-IP $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size    10m;
client_body_buffer_size 128k;
proxy_connect_timeout   90;
proxy_send_timeout      90;
proxy_read_timeout      90;
proxy_buffers           32 4k;

Reverse proxy timeouts are some fairly simple flags that we need to set up in the Nginx configuration, as in the following example:

server {
  listen 80;
  server_name example1.com;
  access_log /var/www/example1.com/log/nginx.access.log;
  error_log /var/www/example1.com/log/nginx_error.log debug;
  # set your default location
  location / {
    include proxy.conf;
    proxy_read_timeout 120;
    proxy_connect_timeout 120;
    proxy_pass http://127.0.0.1:8080;
  }
}

How it works...

In the preceding configuration we have set the proxy_read_timeout and proxy_connect_timeout directives to 120 seconds; it is fairly clear what these achieve in the context of the configuration.

Setting up caching on the reverse proxy

In a setup where Nginx acts as the layer between the client and the backend web application, it is clear that caching can be one of the benefits that can be achieved. In this recipe, we will have a look at setting up caching for any site to which Nginx is acting as a reverse proxy. Due to its extremely small footprint and modular architecture, Nginx has become quite the Swiss Army knife of the modern web stack.

How to do it...
This example configuration shows how we can use caching when utilizing Nginx as a reverse proxy web server:

http {
  proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
  proxy_temp_path /var/www/cache/tmp;
  ...
  server {
    listen 80;
    server_name example1.com;
    access_log /var/www/example1.com/log/nginx.access.log;
    error_log /var/www/example1.com/log/nginx_error.log debug;
    # set your default location
    location / {
      include proxy.conf;
      proxy_pass http://127.0.0.1:8080/;
      proxy_cache my-cache;
      proxy_cache_valid 200 302 60m;
      proxy_cache_valid 404 1m;
    }
  }
}

How it works...

This configuration implements a simple cache with a 1000MB maximum size, and keeps all HTTP response 200 pages in the cache for 60 minutes and HTTP response 404 pages in the cache for 1 minute. There is an initial directive that creates the cache path on initialization; in further directives we basically configure the location that is going to be cached. It is possible to set up more than one cache path for multiple locations.

There's more...

This was a relatively small show of what can be achieved with the caching aspect of the proxy module. Here are some more directives that can be really useful in optimizing and making your stack faster and more efficient:
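The original table of directives is not reproduced here; as a hedged illustration, a few proxy-module directives that are commonly used alongside proxy_cache include the following (the values are examples only):

proxy_cache_use_stale error timeout updating;   # serve stale entries while the backend is down or refreshing
proxy_cache_key "$scheme$host$request_uri";     # control how cache entries are keyed
proxy_no_cache $cookie_session;                 # skip caching for requests carrying a session cookie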

Squid Proxy Server: Tips and Tricks

Packt
16 Mar 2011
6 min read
Rotating log files frequently

Tip: For better performance, it is good practice to rotate log files frequently instead of letting them grow into large files.

The --sysconfdir=/etc/squid/ option

Tip: It's a good idea to use the --sysconfdir=/etc/squid/ option with configure, so that you can share the configuration across different Squid installations while testing.

tproxy mode

Tip: We should note that enabling intercept or tproxy mode disables any configured authentication mechanism. Also, IPv6 is supported for tproxy but requires very recent kernel versions. IPv6 is not supported in the intercept mode.

Securing the port

Tip: We should set the HTTP port carefully, as standard ports like 3128 or 8080 can pose a security risk if we don't secure the port properly. If we don't want to spend time on securing the port, we can use any arbitrary port number above 10000.

ACL naming

Tip: We should carefully note that one ACL name can't be used with more than one ACL type.

acl destination dstdomain example.com
acl destination dst 192.0.2.24

The above code is invalid, as it uses the ACL name destination across two different ACL types.

HTTP access control

Tip: The default behavior of HTTP access control is a bit tricky if access for a client can't be identified by any of the access rules. In such cases, the default behavior is to do the opposite of the last access rule: if the last access rule is deny, then the action will be to allow access, and vice versa. Therefore, to avoid any confusion or undesired behavior, it's good practice to add a deny all line after the access rules.

Using the http_reply_access directive

Tip: We should be really careful while using the http_reply_access directive. When a request is allowed by http_access, Squid will contact the original server even if a rule with the http_reply_access directive denies the response. This may lead to serious security issues. For example, consider a client receiving a malicious URL, which can submit the client's critical private information using the HTTP POST method. If the client's request passes the http_access rules but the response is denied by an http_reply_access rule, then the client will be under the impression that nothing happened, but a hacker will have cleverly stolen our client's private information.

The refresh_pattern directive

Tip: Using refresh_pattern to cache non-cacheable responses, or to alter the lifetime of cached objects, may lead to unexpected behavior or responses from the web servers. We should use this directive very carefully.

The Expires HTTP header

Tip: We should note that the Expires HTTP header overrides min and max values.

Overriding directives

Tip: Please note that the directive never_direct overrides hierarchy_stoplist.

Path of the PID file

Tip: Setting the path of the PID file to none will prevent regular management operations like automatic log rotation or restarting Squid. The operating system will not be able to stop Squid at the time of a shutdown or restart.

Parsing the configuration file

Tip: It's good practice to parse the configuration file for any errors or warnings using the -k parse option before issuing the reconfigure signal.

Squid signals

Tip: Please note that shutdown, interrupt, and kill are Squid signals and not the system kill signals, which are emulated.

Squid process in debug mode

Tip: The Squid process running in debug mode may write a lot of debugging output to the cache.log file and may quickly consume disk space.
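To make the access-control tips above concrete (ACL naming, safe ports, and the deny all rule), here is a minimal squid.conf sketch; the network range, domain, and port numbers are illustrative assumptions, not recommendations:

# each ACL name is bound to exactly one ACL type
acl localnet src 192.0.2.0/24
acl blocked_sites dstdomain .example.com
acl SSL_ports port 443
acl Safe_ports port 80 443
acl CONNECT method CONNECT
# "all" is predefined in newer Squid versions; older versions need this line
acl all src all

# deny requests to ports that are neither safe nor SSL
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

# deny the blocked domains, then allow the local network
http_access deny blocked_sites
http_access allow localnet

# explicit catch-all so the "opposite of the last rule" default never applies
http_access deny all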
Access Control List (ACL) elements with dst

Tip: ACL elements configured with dst as the ACL type work slower than ACLs with the src ACL type, as Squid will have to resolve the destination domain name before evaluating the ACL, which involves a DNS query.

ACL elements with srcdomain

Tip: ACL elements with srcdomain as the ACL type work slower than ACLs with the dstdomain ACL type, because Squid will have to perform a reverse DNS lookup before evaluating the ACL. This introduces significant latency. Moreover, the reverse DNS lookup may not work properly with local IP addresses.

Adding port numbers

Tip: We should note that the port numbers we add to the SSL ports list should be added to the safe ports list as well.

Take care while using the ident protocol

Tip: The ident protocol is not really secure and it's very easy to spoof an ident server, so it should be used carefully.

ident lookups

Tip: Please note that ident lookups are blocking calls; Squid will wait for the reply before it can proceed with processing the request, which may increase delays by a significant margin.

Access denied by http_access

Tip: If a client is denied access by an http_access rule, it will never match an http_reply_access rule. This is because, if a client's request is denied, Squid will not fetch a reply.

Authentication helpers

Tip: Configuring authentication helpers is of no use unless we use the proxy_auth ACL type to control access.

The basic_pop3_auth helper

Tip: The basic_pop3_auth helper uses the Net::POP3 Perl package, so we should make sure that this package is installed before using the authentication helper.

The --enable-ssl option

Tip: Please note that we should use the --enable-ssl option with the configure program before compiling if we want Squid to accept HTTPS requests. Also note that several operating systems don't provide packages capable of HTTPS reverse-proxy due to GPL and policy constraints.

URL redirector programs

Tip: We should be careful while using URL redirector programs, because Squid passes the entire URL, along with query parameters, to the URL redirector program. This may lead to leakage of sensitive client information, as some websites use HTTP GET methods for passing clients' private information.

Using the url_rewrite_access directive to block request types

Tip: Please note that certain request types, such as POST and CONNECT, must not be rewritten, as rewriting them may lead to errors and unexpected behavior. It's a good idea to block them using the url_rewrite_access directive.

In this article we saw some tips and tricks for the Squid proxy server to enhance the performance of your network.

Further resources on this subject:

Configuring Apache and Nginx [Article]
Different Ways of Running Squid Proxy Server [Article]
Lighttpd [Book]
VirtualBox 3.1: Beginner's Guide [Book]
Squid Proxy Server 3.1: Beginner's Guide [Book]

Microsoft Chart with XML Data

Packt
18 Nov 2009
4 min read
Introduction

SQL Server 2000 provided T-SQL language extensions to operate bi-directionally with relational and XML sources. It also provided two system stored procedures, sp_XML_preparedocument and sp_XML_removedocument, that assist the XML-to-relational transformation. This support for returning XML data from relational data using the For XML clause is continued in SQL Server 2005 and SQL Server 2008, although the XML support there is a lot more extensive. The shape of the data returned by the For XML clause is further modified by choosing one of the following modes: raw, auto, explicit, or path. As preparation for this article, we will be creating an XML document starting from the PrincetonTemp table used in a previous article, Binding MS Chart Control to LINQ Data Source Control, on this site.

Creating an XML document from an SQL table

Open SQL Server Management Studio and create a new query [SELECT * from PrincetonTemp for XML auto]. You can use the For XML Auto clause to create an XML document (actually, what you create is a fragment - root-less XML without a processing directive) as shown in Figure 01.

Figure 01: For XML Auto clause of a SELECT statement

The result shown in a table has essentially two columns, with the second column containing the document fragment shown in the next listing.

Listing 01:

<PrincetonTemp Id="1" Month="Jan " Temperature="4.000000000000000e+001" RecordHigh="6.000000000000000e+001"/>
<PrincetonTemp Id="2" Month="Feb " Temperature="3.200000000000000e+001" RecordHigh="5.000000000000000e+001"/>
<PrincetonTemp Id="3" Month="Mar " Temperature="4.300000000000000e+001" RecordHigh="6.500000000000000e+001"/>
<PrincetonTemp Id="4" Month="Apr " Temperature="5.000000000000000e+001" RecordHigh="7.000000000000000e+001"/>
<PrincetonTemp Id="5" Month="May " Temperature="5.300000000000000e+001" RecordHigh="7.400000000000000e+001"/>
<PrincetonTemp Id="6" Month="Jun " Temperature="6.000000000000000e+001" RecordHigh="7.800000000000000e+001"/>
<PrincetonTemp Id="7" Month="Jul " Temperature="6.800000000000000e+001" RecordHigh="7.000000000000000e+001"/>
<PrincetonTemp Id="8" Month="Aug " Temperature="7.100000000000000e+001" RecordHigh="7.000000000000000e+001"/>
<PrincetonTemp Id="9" Month="Sep " Temperature="6.000000000000000e+001" RecordHigh="8.200000000000000e+001"/>
<PrincetonTemp Id="10" Month="Oct " Temperature="5.500000000000000e+001" RecordHigh="6.700000000000000e+001"/>
<PrincetonTemp Id="11" Month="Nov " Temperature="4.500000000000000e+001" RecordHigh="5.500000000000000e+001"/>
<PrincetonTemp Id="12" Month="Dec " Temperature="4.000000000000000e+001" RecordHigh="6.200000000000000e+001"/>

This result is attribute-centric, as each row of data corresponds to a row in the relational table, with each column represented as an XML attribute. The same data can be extracted in an element-centric manner by using the Elements directive in the SELECT statement, as shown in the next figure.

Figure 02: For XML auto, Elements clause of a Select statement

This would still give us an XML fragment, but now it is displayed with element nodes as shown in the next listing (only nodes 1 and 12 are shown).

Listing 02:

<PrincetonTemp>
  <Id>1</Id>
  <Month>Jan </Month>
  <Temperature>4.000000000000000e+001</Temperature>
  <RecordHigh>6.000000000000000e+001</RecordHigh>
</PrincetonTemp>
...
<PrincetonTemp>
  <Id>12</Id>
  <Month>Dec </Month>
  <Temperature>4.000000000000000e+001</Temperature>
  <RecordHigh>6.200000000000000e+001</RecordHigh>
</PrincetonTemp>

To make a clear distinction between the results returned by the two SELECT statements, the first row of data is shown in blue. This query has returned elements and not attributes. As you can see, the returned XML still lacks a root element as well as the XML processing directive. To continue with displaying this data in MS Chart, save Listing 02 as PrincetonXMLDOC.xml to a location of your choice.

Create a Framework 3.5 Web Site project

Let us create a web site project and display the chart on the Default.aspx page. Open Visual Studio 2008 from its shortcut on the desktop. Click File | New | Web Site... (or Shift+Alt+N) to open the New Web Site window. Change the default name of the site to a name of your choice (herein Chart_XMLWeb) as shown. Make sure you are creating a .NET Framework 3.5 web site as shown here.

Figure 03: New Framework 3.5 Web Site Project

Click on the App_Data folder in the Solution Explorer, as shown in the next figure, and click on the Add Existing Item… menu item.

Figure 04: Add an existing item to the web site folder

In the window that is displayed, browse to the location where you saved the PrincetonXMLDOC.xml file and click the Add button. This will add the XML file to the App_Data folder of the web site project. Double-click PrincetonXMLDOC.xml in the web site project folder to display and verify its contents, as shown in the next figure. Only nodes 1 and 12 are shown expanded. As mentioned previously, this is an XML fragment.

Figure 05: Imported PrincetonXMLDOC.xml

Modify this document by adding the <root/> element as well as the XML processing instruction, as shown in the next figure. Build the project.

Figure 06: Modified PrincetonXMLDOC.xml (valid XML document)
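As a side note, assuming SQL Server 2005 or later and a root element named root to match the manual edit above, the wrapping root element can also be requested from the database itself by adding the ROOT directive to the FOR XML clause; only the XML processing instruction would still need to be added by hand:

-- element-centric output wrapped in a <root> element (name is an assumption)
SELECT [Id], [Month], [Temperature], [RecordHigh]
FROM PrincetonTemp
FOR XML AUTO, ELEMENTS, ROOT('root');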

Creating TFS Scheduled Jobs

Packt
28 Sep 2015
12 min read
In this article by Gordon Beeming, the author of the book Team Foundation Server 2015 Customization, we are going to cover TFS scheduled jobs. The topics that we are going to cover include:

Writing a TFS Job
Deploying a TFS Job
Removing a TFS Job

You would want to write a scheduled job for any logic that needs to run at specific times, whether at certain intervals or at specific times of the day. A scheduled job is not the place to put logic that you would like to run as soon as some other event, such as a check-in or a work item change, occurs. The sample job we build in this article will automatically link changesets to work items based on the check-in comments. (For more resources related to this topic, see here.)

The project setup

First off, we'll start with our project setup. This time, we'll create a Windows console application.

Creating a new Windows console application

The references that we'll need this time around are:

Microsoft.VisualStudio.Services.WebApi.dll
Microsoft.TeamFoundation.Common.dll
Microsoft.TeamFoundation.Framework.Server.dll

All of these can be found in C:\Program Files\Microsoft Team Foundation Server 14.0\Application Tier\TFSJobAgent on the TFS server. That's all the setup that is required for your TFS job project. Any class that inherits from ITeamFoundationJobExtension will be able to be used for a TFS job.

Writing the TFS job

So, as mentioned, we are going to need a class that inherits from ITeamFoundationJobExtension. Let's create a class called TfsCommentsToChangeSetLinksJob and inherit from ITeamFoundationJobExtension. As part of this, we will need to implement the Run method, which is part of the interface, like this:

public class TfsCommentsToChangeSetLinksJob : ITeamFoundationJobExtension
{
    public TeamFoundationJobExecutionResult Run(
        TeamFoundationRequestContext requestContext,
        TeamFoundationJobDefinition jobDefinition,
        DateTime queueTime, out string resultMessage)
    {
        throw new NotImplementedException();
    }
}

Then, we also add the using directive:

using Microsoft.TeamFoundation.Framework.Server;

Now, for this specific extension, we'll need to add references to the following:

Microsoft.TeamFoundation.Client.dll
Microsoft.TeamFoundation.VersionControl.Client.dll
Microsoft.TeamFoundation.WorkItemTracking.Client.dll

All of these can be found in C:\Program Files\Microsoft Team Foundation Server 14.0\Application Tier\TFSJobAgent. Now, for the logic of our plugin, we use the following code inside the Run method as a basic shell, in which we'll then place the specific logic for this plugin. The shell adds a try catch block; at the end of the try block it reports a successful job run, while the catch block appends the thrown exception to the job message and reports that the job failed:

resultMessage = string.Empty;
try
{
    // place logic here
    return TeamFoundationJobExecutionResult.Succeeded;
}
catch (Exception ex)
{
    resultMessage += "Job Failed: " + ex.ToString();
    return TeamFoundationJobExecutionResult.Failed;
}

Along with this code, you will need the following using directives:

using Microsoft.TeamFoundation;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
using System.Linq;
using System.Text.RegularExpressions;

So next, we need to place some logic specific to this job in the try block.
First, let's create a connection to TFS for version control:

TfsTeamProjectCollection tfsTPC =
    TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
        new Uri("http://localhost:8080/tfs"));
VersionControlServer vcs = tfsTPC.GetService<VersionControlServer>();

Then, we will get the work item store and query version control history for the last 25 check-ins:

WorkItemStore wis = tfsTPC.GetService<WorkItemStore>();
// get the last 25 check ins
foreach (Changeset changeSet in vcs.QueryHistory("$/", RecursionType.Full, 25))
{
    // place the next logic here
}

Now that we have the changeset history, we check the comments for any references to work items using a simple regex expression:

// try to match the regex for a hash number in the comment
foreach (Match match in Regex.Matches((changeSet.Comment ?? string.Empty), @"#\d{1,}"))
{
    // place the next logic here
}

Inside this loop, we know that we have found a valid number in the comment and that we should attempt to link the check-in to that work item. However, the fact that we have found a number doesn't mean that the work item exists, so let's try to find a work item with the found number:

int workItemId = Convert.ToInt32(match.Value.TrimStart('#'));
var workItem = wis.GetWorkItem(workItemId);
if (workItem != null)
{
    // place the next logic here
}

Here, we are checking that the work item exists; if the workItem variable is not null, we proceed to check whether a link between this changeset and the work item already exists:

// now create the link
ExternalLink changesetLink = new ExternalLink(
    wis.RegisteredLinkTypes[ArtifactLinkIds.Changeset],
    changeSet.ArtifactUri.AbsoluteUri);
// you should verify whether such a link already exists
if (!workItem.Links.OfType<ExternalLink>()
    .Any(l => l.LinkedArtifactUri == changeSet.ArtifactUri.AbsoluteUri))
{
    // place the next logic here
}

If a link does not exist, then we can add a new link:

changesetLink.Comment = "Change set " +
    $"'{changeSet.ChangesetId}'" +
    " auto linked by a server plugin";
workItem.Links.Add(changesetLink);
workItem.Save();
resultMessage += $"Linked CS:{changeSet.ChangesetId} " +
    $"to WI:{workItem.Id}";

We simply grab the last 25 change sets here to keep the sample small. If you were using this in production, you would probably want to store the last change set that you processed and then fetch history up to that point, but that isn't needed to illustrate this sample. After getting the list of change sets, we process everything exactly as described above: we check whether there is a comment and whether that comment contains a hash number that we can try linking to a changeset, we check whether a work item exists for the number that we found, and we then add a link to the work item from the changeset. For each link we add, we append to the overall resultMessage string so that when we look at the results of our job running, we can see which links were added automatically for us. As you can see, with this approach we don't interfere with the check-in itself, but rather link changesets to work items out of band at a later stage.

Deploying our TFS Job

Deploying the code is very simple; change the project's Output type to Class Library. This can be done by going to the project properties, and then, in the Application tab, you will see an Output type drop-down list. Now, build your project.
Then, copy the TfsJobSample.dll and TfsJobSample.pdb output files to the scheduled job plugins folder, which is C:\Program Files\Microsoft Team Foundation Server 14.0\Application Tier\TFSJobAgent\Plugins. Unfortunately, simply copying the files into this folder won't make your scheduled job automatically installed, and the reason for this is that, as part of the interface of the scheduled job, you don't specify when to run your job. Instead, you register the job as a separate step. Change Output type back to the Console Application option for the next step. You can, and should, split the TFS job and its installer into different projects, but in our sample, we'll use the same one.

Registering, queueing, and deregistering a TFS Job

If you try to install the job the way you used to in TFS 2013, you will now get the TF400444 error:

TF400444: The creation and deletion of jobs is no longer supported. You may only update the EnabledState or Schedule of a job. Failed to create, delete or update job id 5a7a01e0-fff1-44ee-88c3-b33589d8d3b3

This is because some changes were made to the job service for security reasons, and these changes prevent you from using the Client Object Model. You are now forced to use the Server Object Model. The code that you have to write is slightly more complicated and requires you to copy your executable to multiple locations to get it working properly. Place all of the following code in your program.cs file inside the Main method. We start off by getting the arguments passed to the application, and if we don't get at least one argument, we don't continue:

#region Collect commands from the args
if (args.Length != 1 && args.Length != 2)
{
    Console.WriteLine("Usage: TfsJobSample.exe <command " +
        "(/r, /i, /u, /q)> [job id]");
    return;
}
string command = args[0];
Guid jobid = Guid.Empty;
if (args.Length > 1)
{
    if (!Guid.TryParse(args[1], out jobid))
    {
        Console.WriteLine("Job Id not a valid Guid");
        return;
    }
}
#endregion

We then wrap all our logic in a try catch block, and in our catch, we only write out the exception that occurred:

try
{
    // place logic here
}
catch (Exception ex)
{
    Console.WriteLine(ex.ToString());
}

Place the next steps inside the try block, unless asked to do otherwise. As part of using the Server Object Model, you'll need to create a DeploymentServiceHost. This requires you to have a connection string to the TFS configuration database, so make sure that the connection string set in the following code is valid for you.
We also need some other generic path information, so we'll mimic what we could expect the job agent's paths to be:

#region Build a DeploymentServiceHost
string databaseServerDnsName = "localhost";
string connectionString = $"Data Source={databaseServerDnsName};" +
    "Initial Catalog=TFS_Configuration;Integrated Security=true;";
TeamFoundationServiceHostProperties deploymentHostProperties =
    new TeamFoundationServiceHostProperties();
deploymentHostProperties.HostType = TeamFoundationHostType.Deployment |
    TeamFoundationHostType.Application;
deploymentHostProperties.Id = Guid.Empty;
deploymentHostProperties.PhysicalDirectory =
    @"C:\Program Files\Microsoft Team Foundation Server 14.0" +
    @"\Application Tier\TFSJobAgent";
deploymentHostProperties.PlugInDirectory =
    $@"{deploymentHostProperties.PhysicalDirectory}\Plugins";
deploymentHostProperties.VirtualDirectory = "/";
ISqlConnectionInfo connInfo =
    SqlConnectionInfoFactory.Create(connectionString, null, null);
DeploymentServiceHost host =
    new DeploymentServiceHost(deploymentHostProperties, connInfo, true);
#endregion

Now that we have a TeamFoundationServiceHost, we are able to create a TeamFoundationRequestContext. We'll need it to call methods such as UpdateJobDefinitions, which adds and/or removes our job, and QueryJobDefinition, which is used to queue our job outside of any schedule:

using (TeamFoundationRequestContext requestContext = host.CreateSystemContext())
{
    TeamFoundationJobService jobService =
        requestContext.GetService<TeamFoundationJobService>();
    // place next logic here
}

We then create a new TeamFoundationJobDefinition instance with all of the information that we want for our TFS job, including the name, schedule, and enabled state:

var jobDefinition = new TeamFoundationJobDefinition(
    "Comments to Change Set Links Job",
    "TfsJobSample.TfsCommentsToChangeSetLinksJob");
jobDefinition.EnabledState = TeamFoundationJobEnabledState.Enabled;
jobDefinition.Schedule.Add(new TeamFoundationJobSchedule
{
    ScheduledTime = DateTime.Now,
    PriorityLevel = JobPriorityLevel.Normal,
    Interval = 300,
});

Once we have the job definition, we check what the command was and then execute the code relating to that command. For the /r command, we will just run our TFS job outside of the TFS job agent:

if (command == "/r")
{
    string resultMessage;
    new TfsCommentsToChangeSetLinksJob().Run(requestContext, jobDefinition,
        DateTime.Now, out resultMessage);
}

For the /i command, we will install the TFS job:

else if (command == "/i")
{
    jobService.UpdateJobDefinitions(requestContext, null, new[] { jobDefinition });
}

For the /u command, we will uninstall the TFS job:

else if (command == "/u")
{
    jobService.UpdateJobDefinitions(requestContext, new[] { jobid }, null);
}

Finally, with the /q command, we will queue the TFS job to run inside the TFS job agent, outside of its schedule:

else if (command == "/q")
{
    jobService.QueryJobDefinition(requestContext, jobid);
}

Now that we have this code in the program.cs file, we need to compile the project and then copy TfsJobSample.exe and TfsJobSample.pdb to the TFS Tools folder, which is C:\Program Files\Microsoft Team Foundation Server 14.0\Tools. Now open a cmd window as an administrator. Change the directory to the Tools folder and then run your application with the /i command, as follows:

Installing the TFS Job

Now, you have successfully installed the TFS job. To uninstall it or force it to be queued, you will need the job ID.
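As a quick reference, since the screenshots are not reproduced here, the invocations look roughly like the following; the folder path and executable name are the ones used earlier in this article, and the all-zero GUID is a placeholder for your own job ID:

cd "C:\Program Files\Microsoft Team Foundation Server 14.0\Tools"

REM install (register) the job
TfsJobSample.exe /i

REM uninstall (deregister) the job, passing the job ID
TfsJobSample.exe /u 00000000-0000-0000-0000-000000000000

REM queue the job outside of its schedule, passing the job ID
TfsJobSample.exe /q 00000000-0000-0000-0000-000000000000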
As shown above, you run /u with the job ID to uninstall the job:

Uninstalling the TFS Job

Queueing the job follows the same approach: simply specify the /q command and the job ID.

How do I know whether my TFS Job is running?

The easiest way to check whether your TFS job is running is to look at the job history table in the configuration database. To do this, you will need the job ID (discussed earlier), which you can obtain by running the following query against the TFS_Configuration database:

SELECT JobId
FROM Tfs_Configuration.dbo.tbl_JobDefinition WITH ( NOLOCK )
WHERE JobName = 'Comments to Change Set Links Job'

With this JobId, we can then run the following query against the job history:

SELECT *
FROM Tfs_Configuration.dbo.tbl_JobHistory WITH (NOLOCK)
WHERE JobId = '<place the JobId from previous query here>'

This will return a list of results describing the previous times the job was run. If you see that your job has a Result of 6, which means the extension was not found, you will need to stop and restart the TFS job agent. You can do this by running the following commands in an administrator cmd window:

net stop TfsJobAgent
net start TfsJobAgent

Note that when you stop the TFS job agent, any jobs that are currently running will be terminated. They will also not get a chance to save their state, which, depending on how they were written, could lead to some unexpected situations when they start again. After the agent has started again, you will see that the Result field is now different, as the job agent will know about your job.

If you prefer browsing the web to see the status of your jobs, you can browse to the job monitoring page (_oi/_jobMonitoring#_a=history), for example, http://gordon-lappy:8080/tfs/_oi/_jobMonitoring#_a=history. This will give you all the data that you can normally query, but with nice graphs and grids.

Summary

In this article, we looked at how to write, install, uninstall, and queue a TFS job. You learned that the way we used to install TFS jobs will no longer work for TFS 2015 because of a security-related change to the Client Object Model.

Resources for Article:

Further resources on this subject:

Getting Started with TeamCity [article]
Planning for a successful integration [article]
Work Item Querying [article]