Agent Roles, Groups, Organizations, and User-Tags

Packt
12 Dec 2016
13 min read
In this article by Cedric Jacob, the author of the book Mastering Zendesk, we will learn about agent roles, groups, organizations, and user tags, and how each item can be set up to serve a more advanced Zendesk setup. The reader will be guided through the different use cases by applying the necessary actions based on the established road map.

When it comes to working with an environment such as Zendesk, which was built to communicate with millions of customers, it is absolutely crucial to understand how we can manage our user accounts and their tickets without losing track of our processes. However, even when working with a smaller customer base, keeping scalability in mind, we should apply the same diligence when it comes to planning our agent roles, groups, organizations, and user tags.

This article will cover the following topics:
- Users / agents / custom agent roles
- Groups
- Organizations
- User tags

(For more resources related to this topic, see here.)

Users/agents

In Zendesk, agents are just like end-users and are classified as users. Both can be located in the same pool of listed accounts. The difference, however, can be found in the assigned role. The role defines what a user can or cannot do. End-users, for example, do not possess the necessary rights to log in to the actual helpdesk environment. Easily enough, the role for end-users is called End-user.

In Zendesk, users are also referred to as people. Both are equivalent terms. The same applies to the two terms end-users and customers.

You can easily access the whole list of users by following these two steps:
1. Click on the admin icon (gear symbol) located at the bottom of Zendesk's sidebar.
2. Click on People located under MANAGE within the admin menu.

Unlike for end-users, there are a few different roles that can be assigned to an agent. Out of the box, Zendesk offers the following options:
- Agent/Staff
- Team Leader
- Advisor
- Administrator

While the agent and staff roles come with the necessary permissions to solve tickets, the team leader role allows more access to the Zendesk environment. The advisor role, in contrast, cannot solve any tickets. This role is supposed to enable the user to manage Zendesk's workflows. This entails the ability to create and edit automations, triggers, macros, views, and SLAs. The admin role includes some additional permissions allowing the user to customize and manage the Zendesk environment.

Note: The number of available roles depends on your Zendesk plan. The ability to create custom roles requires the Enterprise version of Zendesk. If you do not have the option to create custom roles and do not wish to upgrade to the Enterprise plan, you may still want to read on. Other plans still allow you to edit the existing roles.

Custom agent roles

Obviously, we are only scratching the surface here, so let's take a closer look at roles by creating our own custom agent role. In order to create your own custom role, simply follow these steps:
1. Click on the admin icon (gear symbol) located at the bottom of Zendesk's sidebar.
2. Click on People located under MANAGE within the admin menu.
3. Click on role located at the top of the main area (next to "add").

The process of creating a custom role consists of naming and describing the role, followed by defining the permissions. Permissions are categorized under the following headlines:
- Tickets
- People
- Help Center
- Tools
- Channels
- System

Each category houses options to set individual permissions concerning that one specific topic.
Let's examine these categories one by one and decide on each setting for our example role–Tier 1 Agent. Ticket Permissions In the first part, we can choose what permissions the agent should receive when it comes to handling tickets: What kind of tickets can this agent access? Those assigned to the agent only Those requested by users in this agent's organization All those within this agent's group(s) All Agent can assign the ticket to any group? Yes No What type of comments can this agent make? Private only Public and private Can edit ticket properties? Yes No Can delete tickets? Yes No Can merge tickets? Yes No Can edit ticket tags? Yes No People Permissions The second part allows us to set permissions regarding the agent's ability to manage other users/people: What access does this agent have to end-user profiles? Read only Add and edit within their organization Add, edit, and delete all May this user view lists of user profiles? Cannot browse or search for users Can view all users in your account Can add or modify groups and organizations? Yes No So what kind of access should our agent have to end-user profiles? For now, we will go for option one and choose Read only. It would make sense to forward more complicated tickets to our "Tier 2" support, who receive the permission to edit end-user profiles. Should our agent be allowed to view the full list of users? In some cases, it might be helpful if agents can search for users within the Zendesk system. In this case, we will answer our question with a "yes" and check the box. Should the agent be allowed to modify groups and organizations? None of our planned workflows seem to require this permission. We will not check this box and therefore remove another possible source of error. Help Center Permissions The third part concerns the Help Center permissions: Can manage Help Center? Yes No Does our agent need the ability to edit the Help Center? As the primary task of our "Tier 1" agents consists of answering tickets, we will not check this box and leave this permission to our administrators. Tools Permissions The fourth part gives us the option to set permissions that allow agents to make use of Zendesk Tools: What can this agent do with reports? Cannot view Can view only Can view, add, and edit What can this agent do with views? Play views only See views only Add and edit personal views Add and edit personal and group views Add and edit personal, group, and global views What can this agent do with macros? Cannot add or edit Can add and edit personal macros Can add and edit personal and group macros Can add and edit personal, group, and global macros Can access dynamic content? Yes No Should our agent have the permission to view, edit, and add reports? We do not want our agents to interact with Zendesk's reports on any level. We might, instead, want to create custom reports via GoodData, which can be sent out via e-mail to our agents. Therefore, in this case, we choose the option Cannot view. What should the agent be allowed to do with views? As we will set up all the necessary views for our agents, we will go for the "See views only" option. If there is a need for private views later on, we can always come back and change this setting retroactively. What should the "Tier 1" agent be allowed to do when it comes to macros? In our example, we want to create a very streamlined support. All creation of content should take place at the administrative level and be handled by team leaders. Therefore, we will select the "Cannot add or edit" option. 
Should the agent be allowed to access dynamic content? We will not check this option. The same reasons apply here: content creation will happen at the administrative level. Channels Permissions The fifth part allows us to set any permissions related to ticket channels: Can manage Facebook pages? Yes No There is no need for our "Tier 1" agents to receive any of these permissions as they are of an administrative nature. System Permissions Last but not least, we can decide on some more global system-related permissions: Can manage business rules? Yes No Can manage channels and extensions? Yes No Again, there is no need for our "Tier 1" agent to receive these permissions as they are of an administrative nature. Groups Groups, unlike organizations, are only meant for agents and each agent must be at least in one group. Groups play a major role when it comes to support workflows and can be used in many different ways. How to use groups becomes apparent when planning your support workflow. In our case, we have four types of support tickets: Tier 1 Support Tier 2 Support VIP Support Internal Support Each type of ticket is supposed to be answered by specific agents only. In order to achieve this, we can create one group for each type of ticket and later assign these groups to our agents accordingly. In order to review and edit already existing groups, simply follow these steps: Click on the admin icon (gear symbol) located at the bottom of Zendesk's sidebar. Click on People located under MANAGE within the admin menu. Click on groups located under the search bar within the main area: Creating a group is easy. We simply choose a name and tick the box next to each agent that we would like to be associated with this group: There are two ways to add an agent to a group. While you may choose to navigate to the group itself in order to edit it, you can also assign groups to agents within their own user panel. Organizations Organizations can be very helpful when managing workflows, though there is no imperative need to associate end-users with an organization. Therefore, we should ask ourselves this: Do we need to use organizations to achieve our desired workflows? Before we can answer this question, let's take a look at how organizations work in Zendesk: When creating an organization within Zendesk, you may choose one or more domains associated with that organization. As soon as an end-user creates a ticket using an e-mail address with that specific domain, the user is added to that organization. There are a few more things you can set within an organization. So let's take a quick look at all the available options. In order to add a new organization, simply follow these steps: Click on the admin icon (gear symbol) located at the bottom of Zendesk's sidebar. Click on People located under MANAGE within the admin menu. Click on organization located at the top of the main area (next to add): When adding a new organization, Zendesk asks you to provide the following details: The name of the organization The associated domains Once we click on Save, Zendesk automatically opens this organization as a tab and shows next to any ticket associated with the organization. Here are a few more options we can set up: Tags Domains Group Users Details Notes Tags Zendesk allows us to define tags, that would automatically be added to each ticket, created by a user within this organization. Domains We can add as many associated domains as we need. Each domain should be separated by a single space. 
Group

Tickets associated with this organization can be assigned to a group automatically. We can choose any group via a drop-down menu.

Users

We get the following two options to choose from:
- Can view own tickets only
- Can view all org tickets

This allows us to let users who are part of this organization either view only their own tickets or review all the tickets created by users within this organization. If we allow users to view all the tickets within their organization, we receive two more options:
- ...but not add comments
- ...and add comments

Details

We may add additional information about the organization, such as an address.

Notes

Additionally, we may add notes that are visible only to agents.

User tags

To understand user tags, we need to understand how Zendesk utilizes tags and how they can help us. Tags can be added to users, organizations, and tickets, and user tags and organization tags are ultimately applied to tickets when they are created. For instance, if a user is tagged with the vip tag, all of their tickets will subsequently be tagged with the vip tag as well. We can then use that tag as a condition in our business rules.

But how can we set user tags without having to do so manually? This is a very important question. In our flowchart, we need to know whether a customer is in fact a VIP user in order for our business rules to escalate the tickets according to our SLA rules. Let's take a quick look at our plan:
- We could send VIP information via the support form.
- We could use SSO and set the VIP status via a user tag.
- We could set the user tag via the API when the subscription is bought.

In our first option, we would try to send a tag from our support form to Zendesk so that the ticket is tagged accordingly. In our second option, we would set the user tag, and subsequently the ticket tag, via SSO (Single Sign-On). In our last option, we would set the user tag via the Zendesk API when a subscription is bought.

We remember that a customer of our ExampleComp becomes eligible for VIP service only on having bought a software subscription. In our case, we might go for option number three. It is a very clean solution and also allows us to remove the user tag when the subscription is canceled.

So how can we achieve this? Luckily, Zendesk offers a well-documented and easy-to-understand API. We can therefore do the necessary research and forward our requirements to our developers. Before we look at any code, we should create a quick outline:
- User registers on ExampleComp's website: a Zendesk user is created.
- User subscribes to a software package: the user tag is added to the existing Zendesk user.
- User unsubscribes from the software package: the user tag is removed from the existing Zendesk user.
- User deletes their account from ExampleComp's website: the Zendesk user is removed.

All this can easily be achieved with a few lines of code.
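For illustration, the "add the vip tag" step from the outline could be a single call against the Users API. Treat the following as a hedged sketch rather than the book's exact code: it assumes user tagging is enabled on the account and that the user update endpoint accepts a "tags" array (check the current Zendesk REST API documentation before relying on it):

curl -v -u {email_address}:{password} https://{subdomain}.zendesk.com/api/v2/users/{id}.json -H "Content-Type: application/json" -X PUT -d '{"user": {"tags": ["vip"]}}'

Removing VIP status when the subscription is canceled would then be a matter of writing the user's tag list back without the vip tag, using the same endpoint.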
You may want to refer your developers to the following webpage: https://developer.zendesk.com/rest_api/docs/core/users

If you have coding experience, here are the necessary code snippets.

For creating a new end-user:

curl -v -u {email_address}:{password} https://{subdomain}.zendesk.com/api/v2/users.json -H "Content-Type: application/json" -X POST -d '{"user": {"name": "FirstName LastName", "email": "user@example.org"}}'

For updating an existing user:

curl -v -u {email_address}:{password} https://{subdomain}.zendesk.com/api/v2/users/{id}.json -H "Content-Type: application/json" -X PUT -d '{"user": {"name": "Roger Wilco II"}}'

Summary

In this article, you learned about Zendesk users, roles, groups, organizations, and user tags. Following up on our road map, we laid important groundwork for a functioning Zendesk environment by setting up some of the basic requirements for more complex workflows.

Resources for Article:
Further resources on this subject:
Deploying a Zabbix proxy [article]
Inventorying Servers with PowerShell [article]
Designing Puppet Architectures [article]


Xamarin.Forms

Packt
09 Dec 2016
11 min read
Since the beginning of Xamarin's lifetime as a company, their motto has always been to present the native APIs on iOS and Android idiomatically to C#. This was a great strategy in the beginning, because applications built with Xamarin.iOS or Xamarin.Android were pretty much indistinguishable from native Objective-C or Java applications. Code sharing was generally limited to non-UI code, which left a potential gap to fill in the Xamarin ecosystem—a cross-platform UI abstraction. Xamarin.Forms is the solution to this problem, a cross-platform UI framework that renders native controls on each platform. Xamarin.Forms is a great framework for those who know C# (and XAML) but may not want to get into the full details of using the native iOS and Android APIs.

In this article by Jonathan Peppers, author of the book Xamarin 4.x Cross-Platform Application Development - Third Edition, we will discuss the following topics:
- Use XAML with Xamarin.Forms
- Cover data binding and MVVM with Xamarin.Forms

(For more resources related to this topic, see here.)

Using XAML in Xamarin.Forms

In addition to defining Xamarin.Forms controls from C# code, Xamarin has provided the tooling for developing your UI in XAML (Extensible Application Markup Language). XAML is a declarative language that is basically a set of XML elements that map to certain controls in the Xamarin.Forms framework. Using XAML is comparable to using HTML to define the UI on a webpage, with the exception that XAML in Xamarin.Forms creates C# objects that represent a native UI.

To understand how XAML works in Xamarin.Forms, let's create a new page with lots of UI on it. Return to your HelloForms project from earlier, and open the HelloFormsPage.xaml file. Add the following XAML code between the <ContentPage> tags:

<StackLayout Orientation="Vertical" Padding="10,20,10,10">
  <Label Text="My Label" XAlign="Center" />
  <Button Text="My Button" />
  <Entry Text="My Entry" />
  <Image Source="https://www.xamarin.com/content/images/pages/branding/assets/xamagon.png" />
  <Switch IsToggled="true" />
  <Stepper Value="10" />
</StackLayout>

Go ahead and run the application on iOS and Android; your application will look something like the following screenshots:

First, we created a StackLayout control, which is a container for other controls. It can lay out controls either vertically or horizontally, one by one, as defined by the Orientation value. We also applied a padding of 10 around the sides and bottom, and 20 from the top to adjust for the iOS status bar. You may recognize this syntax for defining rectangles if you are familiar with WPF or Silverlight. Xamarin.Forms uses the same syntax of left, top, right, and bottom values delimited by commas.

We also used several of the built-in Xamarin.Forms controls to see how they work:
- Label: We used this earlier in the article. Used only for displaying text, this maps to a UILabel on iOS and a TextView on Android.
- Button: A general purpose button that can be tapped by a user. This control maps to a UIButton on iOS and a Button on Android.
- Entry: This control is a single-line text entry. It maps to a UITextField on iOS and an EditText on Android.
- Image: This is a simple control for displaying an image on the screen, which maps to a UIImageView on iOS and an ImageView on Android. We used the Source property of this control, which loads an image from a web address. Using URLs on this property is nice, but it is best for performance to include the image in your project where possible.
- Switch: This is an on/off switch or toggle button. It maps to a UISwitch on iOS and a Switch on Android.
- Stepper: This is a general-purpose input for entering numbers via two plus and minus buttons. On iOS this maps to a UIStepper, while on Android Xamarin.Forms implements this functionality with two Buttons.

These are just some of the controls provided by Xamarin.Forms. There are also more complicated controls, such as the ListView and TableView, that you would expect for delivering mobile UIs.

Even though we used XAML in this example, you could also implement this Xamarin.Forms page from C#. Here is an example of what that would look like:

public class UIDemoPageFromCode : ContentPage
{
    public UIDemoPageFromCode()
    {
        var layout = new StackLayout
        {
            Orientation = StackOrientation.Vertical,
            Padding = new Thickness(10, 20, 10, 10),
        };
        layout.Children.Add(new Label
        {
            Text = "My Label",
            XAlign = TextAlignment.Center,
        });
        layout.Children.Add(new Button
        {
            Text = "My Button",
        });
        layout.Children.Add(new Image
        {
            Source = "https://www.xamarin.com/content/images/pages/branding/assets/xamagon.png",
        });
        layout.Children.Add(new Switch
        {
            IsToggled = true,
        });
        layout.Children.Add(new Stepper
        {
            Value = 10,
        });
        Content = layout;
    }
}

So you can see where using XAML can be a bit more readable, and it is generally a bit better at declaring UIs. However, using C# to define your UIs is still a viable, straightforward approach.

Using data binding and MVVM

At this point, you should be grasping the basics of Xamarin.Forms, but you may be wondering how the MVVM design pattern fits into the picture. The MVVM design pattern was originally conceived for use along with XAML and the powerful data binding features XAML provides, so it is only natural that it is a perfect design pattern to be used with Xamarin.Forms.

Let's cover the basics of how data binding and MVVM are set up with Xamarin.Forms:
- Your Model and ViewModel layers will remain mostly unchanged from the MVVM pattern.
- Your ViewModels should implement the INotifyPropertyChanged interface, which facilitates data binding. To simplify things in Xamarin.Forms, you can use the BindableObject base class and call OnPropertyChanged when values change on your ViewModels.
- Any Page or control in Xamarin.Forms has a BindingContext, which is the object that it is data bound to. In general, you can set a corresponding ViewModel to each view's BindingContext property.
- In XAML, you can set up a data binding by using syntax of the form Text="{Binding Name}". This example would bind the Text property of the control to a Name property of the object residing in the BindingContext.
- In conjunction with data binding, events can be translated to commands using the ICommand interface. So, for example, a Button's click event can be data bound to a command exposed by a ViewModel. There is a built-in Command class in Xamarin.Forms to support this.
- Data binding can also be set up from C# code in Xamarin.Forms via the Binding class. However, it is generally much easier to set up bindings from XAML, since the syntax has been simplified with XAML markup extensions.

Now that we have covered the basics, let's go through, step by step, how to use them with Xamarin.Forms. For the most part, we can reuse the Model and ViewModel layers, although we will have to make a few minor changes to support data binding from XAML. Let's begin by creating a new Xamarin.Forms application backed by a PCL named XamSnap:
1. First, create three folders in the XamSnap project named Views, ViewModels, and Models.
2. Add the appropriate ViewModels and Models.
3. Build the project, just to make sure everything is saved. You will get a few compiler errors that we will resolve shortly.

The first class we will need to edit is the BaseViewModel class; open it and make the following changes:

public class BaseViewModel : BindableObject
{
    protected readonly IWebService service = DependencyService.Get<IWebService>();
    protected readonly ISettings settings = DependencyService.Get<ISettings>();

    bool isBusy = false;

    public bool IsBusy
    {
        get { return isBusy; }
        set
        {
            isBusy = value;
            OnPropertyChanged();
        }
    }
}

First of all, we removed the calls to the ServiceContainer class, because Xamarin.Forms provides its own IoC container called the DependencyService. It has one method, Get<T>, and registrations are set up via an assembly attribute that we will set up shortly. Additionally, we removed the IsBusyChanged event in favor of the INotifyPropertyChanged interface that supports data binding. Inheriting from BindableObject gave us the helper method OnPropertyChanged, which we use to inform bindings in Xamarin.Forms that the value has changed. Notice we didn't pass a string containing the property name to OnPropertyChanged. This method uses a lesser-known feature of .NET 4.5 called CallerMemberName, which automatically fills in the calling property's name at compile time.

Next, let's set up our needed services with the DependencyService. Open App.xaml.cs in the root of the PCL project and add the following two lines above the namespace declaration:

[assembly: Dependency(typeof(XamSnap.FakeWebService))]
[assembly: Dependency(typeof(XamSnap.FakeSettings))]

The DependencyService will automatically pick up these attributes and inspect the types we declared. Any interfaces these types implement will be returned for any future callers of DependencyService.Get<T>. I normally put all Dependency declarations in the App.cs file, just so they are easy to manage and in one place.

Next, let's modify LoginViewModel by adding a new property:

public Command LoginCommand { get; set; }

We'll use this shortly for data binding a button's command. One last change in the view model layer is to set up INotifyPropertyChanged for the MessageViewModel:

Conversation[] conversations;

public Conversation[] Conversations
{
    get { return conversations; }
    set
    {
        conversations = value;
        OnPropertyChanged();
    }
}

Likewise, you could repeat this pattern for the remaining public properties throughout the view model layer, but this is all we will need for this example. Next, let's create a new Forms ContentPage XAML file under the Views folder named LoginPage. In the code-behind file, LoginPage.xaml.cs, we'll just need to make a few changes:

public partial class LoginPage : ContentPage
{
    readonly LoginViewModel loginViewModel = new LoginViewModel();

    public LoginPage()
    {
        Title = "XamSnap";
        BindingContext = loginViewModel;

        loginViewModel.LoginCommand = new Command(async () =>
        {
            try
            {
                await loginViewModel.Login();
                await Navigation.PushAsync(new ConversationsPage());
            }
            catch (Exception exc)
            {
                await DisplayAlert("Oops!", exc.Message, "Ok");
            }
        });

        InitializeComponent();
    }
}

We did a few important things here, including setting the BindingContext to our LoginViewModel. We set up the LoginCommand, which basically invokes the Login method and displays a message if something goes wrong. It also navigates to a new page if successful. We also set the Title, which will show up in the top navigation bar of the application.
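For reference, here is a minimal sketch of what the LoginViewModel used above could look like. The property names UserName and Password, and the shape of the Login method and the IWebService call, are assumptions based on the bindings and calls shown in this article rather than the book's exact code:

// requires using System.Threading.Tasks;
public class LoginViewModel : BaseViewModel
{
    string userName;
    string password;

    public string UserName
    {
        get { return userName; }
        set { userName = value; OnPropertyChanged(); }
    }

    public string Password
    {
        get { return password; }
        set { password = value; OnPropertyChanged(); }
    }

    // Assigned from the page's code-behind, as shown above
    public Command LoginCommand { get; set; }

    public async Task Login()
    {
        IsBusy = true;
        try
        {
            // Hypothetical IWebService call; the real signature lives in the XamSnap sample
            await service.Login(UserName, Password);
        }
        finally
        {
            IsBusy = false;
        }
    }
}

Because the setters call OnPropertyChanged, the Entry controls bound to UserName and Password in the XAML below stay in sync with the view model, and the ActivityIndicator bound to IsBusy appears while Login is running.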
Next, open LoginPage.xaml and we'll add the following XAML code inside the ContentPage's content: <StackLayout Orientation="Vertical" Padding="10,10,10,10"> <Entry Placeholder="Username" Text="{Binding UserName}" /> <Entry Placeholder="Password" Text="{Binding Password}" IsPassword="true" /> <Button Text="Login" Command="{Binding LoginCommand}" /> <ActivityIndicator IsVisible="{Binding IsBusy}" IsRunning="true" /> </StackLayout> This will setup the basics of two text fields, a button, and a spinner complete with all the bindings to make everything work. Since we setup the BindingContext from the LoginPage code behind, all the properties are bound to the LoginViewModel. Next, create a ConversationsPage as a XAML page just like before, and edit the ConversationsPage.xaml.cs code behind: public partial class ConversationsPage : ContentPage { readonly MessageViewModel messageViewModel = new MessageViewModel(); public ConversationsPage() { Title = "Conversations"; BindingContext = messageViewModel; InitializeComponent(); } protected async override void OnAppearing() { try { await messageViewModel.GetConversations(); } catch (Exception exc) { await DisplayAlert("Oops!", exc.Message, "Ok"); } } } In this case, we repeated a lot of the same steps. The exception is that we used the OnAppearing method as a way to load the conversations to display on the screen. Now let's add the following XAML code to ConversationsPage.xaml: <ListView ItemsSource="{Binding Conversations}"> <ListView.ItemTemplate> <DataTemplate> <TextCell Text="{Binding UserName}" /> </DataTemplate> </ListView.ItemTemplate> </ListView> In this example, we used a ListView to data bind a list of items and display on the screen. We defined a DataTemplate class, which represents a set of cells for each item in the list that the ItemsSource is data bound to. In our case, a TextCell displaying the Username is created for each item in the Conversations list. Last but not least, we must return to the App.xaml.cs file and modify the startup page: MainPage = new NavigationPage(new LoginPage()); We used a NavigationPage here so that Xamarin.Forms can push and pop between different pages. This uses a UINavigationController on iOS, so you can see how the native APIs are being used on each platform. At this point, if you compile and run the application, you will get a functional iOS and Android application that can login and view a list of conversations: Summary In this article we covered the basics of Xamarin.Forms and how it can be very useful for building your own cross-platform applications. Xamarin.Forms shines for certain types of apps, but can be limiting if you need to write more complicated UIs or take advantage of native drawing APIs. We discovered how to use XAML for declaring our Xamarin.Forms UIs and understood how Xamarin.Forms controls are rendered on each platform. We also dove into the concepts of data binding and how to use the MVVM design pattern with Xamarin.Forms. Resources for Article: Further resources on this subject: Getting Started with Pentaho Data Integration [article] Where Is My Data and How Do I Get to It? [article] Configuring and Managing the Mailbox Server Role [article]


Designing a System Center Configuration Manager Infrastructure

Packt
09 Dec 2016
10 min read
In this article by Samir Hammoudi and Chuluunsuren Damdinsuren, the authors of Microsoft System Center Configuration Manager Cookbook - Second Edition, we will cover the following recipes:
- What's changed from System Center 2012 Configuration Manager?
- System Center Configuration Manager's new servicing models

In this article, we will learn about the new servicing model and walk through the various setup scenarios and configurations for System Center Configuration Manager Current Branch (SCCM CB). We will also look at designing and keeping a System Center Configuration Manager (SCCM) infrastructure current by using best practices such as keeping SQL Server on the site, offloading some roles as needed, and performing in-place upgrades from CM12.

What's changed from System Center 2012 Configuration Manager?

We will go through the new features, changes, and removed features in CM since CM 2012.

Getting ready

The following are the new features in CM since CM12:
- In-console updates for Configuration Manager: CM uses an in-console service method called Updates and Servicing that makes it easy to locate and install updates for CM.
- Service Connection Point: The Microsoft Intune connector is replaced by a new site system role named Service Connection Point. The service connection point is used as a point of contact for the devices you manage, uploads usage and diagnostic data to the Microsoft cloud service, and makes available the updates that apply within the CM console.
- Windows 10 Servicing: You can view the dashboard which tracks all Windows 10 PCs in your environment, create servicing plans to ensure Windows 10 PCs are kept up to date, and also view alerts when Windows 10 clients are nearing the end of a CB/CBB support cycle.

How to do it...

What's new in CM capabilities

This information is based on versions 1511 and 1602. You can find out if a change was made in 1602 or later by looking for the "version 1602 or later" tag. You can find the latest changes at https://technet.microsoft.com/en-us/library/mt757350.aspx.
Endpoint Protection anti-malware: Real-time protection: This blocks potentially unwanted applications at download and prior to installation Scan settings: This scans mapped network drives when running a full scan Auto sample file submission settings: This is used to manage the behavior Exclusion settings: This section of the policy is improved to allow device exclusions Software updates: CM can differentiate a Windows 10 computer that connects to Windows Update for Business (WUfB) versus the computers connected to SUP You can schedule, or run manually, the WSUS clean up task from the CM console CM has the ability to manage Office 365 client updates by using the SUP (version 1602 or later) Application management: This supports Universal Windows Platform (UWP) apps The user-available apps now appear in Software Center When you create an in-house iOS app you only need to specify the installer (.ipa) file You can still enter the link directly, but you can now browse the store for the app directly from the CM console CM now supports apps you purchase in volume from the Apple Volume-Purchase Program (VPP) (version 1602 or later) Use CM app configuration policies to supply settings that might be required when the user runs an iOS app (version 1602 or later) Operating system deployment: A new task sequence (TS) type is available to upgrade computers from Windows 7/8/8.1 to Windows 10 Windows PE Peer Cache is now available that runs a TS using Windows PE Peer Cache to obtain content from a local peer, instead of running it from a DP You can now view the state, deploy the servicing plans, and get alerts of WaaS in your environment, to keep the Windows 10 current branch updated Client deployment: You can test new versions of the CM client before upgrading the rest of the site with the new software Site infrastructure: CM sites support the in-place upgrade of the site server's OS from Windows Server 2008 R2 to Windows Server 2012 R2 (version 1602 or later) SQL Server AlwaysOn is supported for CM (version 1602 or later) CM supports Microsoft Passport for Work which is an alternative sign-in method to replace a password, smart card, or virtual smart card Compliance settings: When you create a configuration item, only the settings relevant to the selected platform are available It is now easier to choose the configuration item type in the create configuration item wizard and has a number of new settings It provides support for managing settings on Mac OS X computers You can now specify kiosk mode settings for Samsung KNOX devices. 
(version 1602 or later) Conditional access: Conditional access to Exchange Online and SharePoint Online is supported for PCs managed by CM (version 1602 or later) You can now restrict access to e-mail and 0365 services based on the report of the Health Attestation Service (version 1602 or later) New compliance policy rules like automatic updates and passwords to unlock devices, have been added to support better security requirements (version 1602 or later) Enrolled and compliant devices always have access to Exchange On-Premises (version 1602 or later) Client management: You can now see whether a computer is online or not via its status (version 1602 or later) A new option, Sync Policy has been added by navigating to the Software Center | Options | Computer Maintenance which refreshes its machine and user policy (version 1602 or later) You can view the status of Windows 10 Device Health Attestation in the CM console (version 1602 or later) Mobile device management with Microsoft Intune: Improved the number of devices a user can enroll Specify terms and conditions users of the company portal must accept before they can enroll or use the app Added a device enrollment manager role to help manage large numbers of devices CM can help you manage iOS Activation Lock, a feature of the Find My iPhone app for iOS 7.1 and later devices (version 1602 or later) You can monitor terms and conditions deployments in the CM console (version 1602 or later) On-premises Mobile Device Management: You can now manage mobile devices using on-premises CM infrastructure via a management interface that is built into the device OS Removed features There are two features that were removed from CM current branch's initial release in December 2015, and there will be no more support on these features. If your organization uses these features, you need to find alternatives or stay with CM12. Out of Band Management: With Configuration Manager, native support for AMT-based computers from within the CM console has been removed. Network Access Protection: CM has removed support for Network Access Protection. The feature has been deprecated in Windows Server 2012 R2 and is removed from Windows 10. See also Refer to the TechNet documentation on CM changes at https://technet.microsoft.com/en-us/library/mt622084.aspx System Center Configuration Manager's new servicing models The new concept servicing model is one of the biggest changes in CM. We will learn what the servicing model is and how to do it in this article. Getting Ready Windows 10's new servicing models Before we dive into the new CM servicing model, we first need to understand the new Windows 10 servicing model approach called Windows as a Service (WaaS). Microsoft regularly gets asked for advice on how to keep Windows devices secure, reliable, and compatible. Microsoft has a pretty strong point-of-view on this: Your devices will be more secure, more reliable, and more compatible if you are keeping up with the updates we regularly release. In a mobile-first, cloud-first world, IT expects to have new value and new capabilities constantly flowing to them. Most users have smart phones and regularly accept the updates to their apps from the various app stores. The iOS and Android ecosystems also release updates to the OS on a regular cadence. 
With this in mind, Microsoft is committed to continuously rolling out new capabilities to users around the world, but Windows is unique in that it is used in an incredibly broad set of scenarios, from a simple phone to some of the most complex and mission critical use scenarios in factories and hospitals. It is clear that one model does not fit all of these scenarios. To strike a balance between the needed updates for such a wide range of device types, there are four servicing options (summarized in Table 1) you will want to completely understand. Table 1. Windows 10 servicing options (WaaS) Servicing Models Key Benefits Support Lifetime Editions Target Scenario Windows Insider Program Enables testing new features before release N/A Home, Pro, Enterprise, Education IT Pros, Developers Current Branch (CB) Makes new features available to users immediately Approximately 4 months Home, Pro, Enterprise, Education Consumers, limited number of Enterprise users Current Branch for Business (CBB) Provides additional testing time through Current Branch Approximately 8 months Pro, Enterprise, Education Enterprise users Long-Term Servicing Branch (LTSB) Enables long-term low changing deployments like previous Windows versions 10 Years Enterprise LTSB ATM, Line machines, Factory control How to do it... How will CM support Windows 10? As you read in the previous section, Windows 10 brings with it new options for deployment and servicing models. On the System Center side, it has to provide enterprise customers with the best management for Windows 10 with CM by helping you deploy, manage, and service Windows 10. Windows 10 comes in two basic types: a Current Branch/Current Branch for Business with fast version model, and the LTSB with a more traditional support model. Therefore, Microsoft has released a new version of CM to provide full support for the deployment, upgrade, and management of Windows 10 in December 2015. The new CM (simply without calendar year) is called Configuration Manager Current Branch (CMCB), and designed to support the much faster pace of updates for Windows 10, by being updated periodically. This new version will also simplify the CM upgrade experience itself. One of the core capabilities of this release is a brand new approach for updating the features and functionality of CM. Moving faster with CM will allow you to take advantage of the very latest feature innovations in Windows 10, as well as other operating systems such as Apple iOS and Android when using mobile device management (MDM) and mobile application management (MAM) capabilities. The new features for CM are in-console Updates-and-Servicing processes that replace the need to learn about, locate, and download updates from external sources. This means no more service packs or cumulative update versions to track. Instead, when you use the CM current branch, you periodically install in-console updates to get a new version. New update versions release periodically and will include product updates and can also introduce new features you may choose to use (or not use) in your deployment. Because CM will be updated frequently, will be denoted each particular version with a version number, for example 1511 for a version shipped in December 2015. Updates will be released for the current branch about three times a year. The first release of the current branch was 1511 in December 2015, followed by 1602 in March 2016. Each update version is supported for 12 months from its general availability release date. 
Why is there another version called Configuration Manager LTSB 2016?

There will be a release named System Center Configuration Manager LTSB 2016 that aligns with the release of Windows Server 2016 and System Center 2016. With this version, as with the previous versions 2007 and 2012, you do not have to update the Configuration Manager site servers the way you do with the current branch.

Table 2. Configuration Manager servicing options:

Servicing Options               | Benefits                               | Support Lifetime        | Intended Target Clients
CM CB                           | Fully supports any type of Windows 10  | Approximately 12 months | Windows 10 CB/CBB, Windows 10
Configuration Manager LTSB 2016 | You do not need to update frequently   | 10 years                | Windows 10 LTSB

Summary

In this article, we learned about the new servicing model and walked through the various setup scenarios and configurations for SCCM CB.

Resources for Article:
Further resources on this subject:
Getting Started with Pentaho Data Integration [article]
Where Is My Data and How Do I Get to It? [article]
Configuring and Managing the Mailbox Server Role [article]


What’s New in SQL Server 2016 Reporting Services

Packt
09 Dec 2016
4 min read
In this article by Robert C. Cain, coauthor of the book SQL Server 2016 Reporting Services Cookbook, we'll take a brief tour of the new features in SQL Server 2016 Reporting Services. SQL Server 2016 Reporting Services is a true evolution in reporting technology. After making few changes to SSRS over the last several releases, Microsoft unveiled a virtual cornucopia of new features.

(For more resources related to this topic, see here.)

Report Portal

The old Report Manager has received a complete facelift, along with many new features. Along with it came a rename: it is now known as the Report Portal. The following is a screenshot of the new portal:

KPIs

KPIs are the first feature you'll notice. The Report Portal has the ability to display key performance indicators directly, meaning your users can get important metrics at a glance without the need to open reports. In addition, these KPIs can be linked to other report items such as reports and dashboards, so that a user can simply click on them to find more information.

Mobile Reporting

Microsoft recognized that the users in your organization no longer use just a computer to retrieve their information. Mobile devices, such as phones and tablets, are now commonplace. You could, of course, design individual reports for each platform, but that would cause a lot of repetitive work and limit reuse. To solve this, Microsoft has incorporated a new tool, Mobile Reports. This allows you to create an attractive dashboard that can be displayed in any web browser. In addition, you can easily rearrange the dashboard layout to optimize for both phones and tablets. This means you can create your report once and use it on multiple platforms. Below are three images of the same mobile report. The first was done via a web browser, the second on a tablet, and the final one on a phone:

Paginated reports

Traditional SSRS reports have now been renamed Paginated Reports, and they are still a critical element in reporting. These provide the detailed information needed for day-to-day activities in your company. Paginated reports have received several enhancements. First, there are two new chart types, Sunburst and TreeMap. Reports may now be exported to a new format, PowerPoint. Additionally, all reports are now rendered in HTML 5 format. This makes them accessible to any browser, including those running on tablets or other platforms such as Linux or the Mac.

PowerBI

PowerBI Desktop reports may now be housed within the Report Portal. Currently, opening one will launch the PowerBI desktop application. However, Microsoft has announced that in an upcoming update to SSRS 2016, PowerBI reports will be displayed directly within the Report Portal without the need to open the external app.

Reporting applications

Speaking of apps, the Report Builder has received a facelift, updating it to a more modern user interface with a color scheme that matches the Report Portal. Report Builder has also been decoupled from the installation of SQL Server. In previous versions, Report Builder was part of the SQL Server install, or it was available as a separate download. With SQL Server 2016, both the Report Builder and the Mobile Reporting tool are separate downloads, making it easier to keep them current as new versions are released. The Report Portal now contains links to download these tools.

Excel

Excel workbooks, often used as a reporting tool themselves, may now be housed within the Report Portal. Opening them will launch Excel, similar to the way in which PowerBI reports currently work.
Summary This article summarizes just some of the many new enhancements to SQL Server 2016 Reporting Services. With this release, Microsoft has worked toward meeting the needs of many users in the corporate environment, including the need for mobile reporting, dashboards, and enhanced paginated reports. For more details about these and many more features see the book SQL Server 2016 Reporting Services Cookbook, by Dinesh Priyankara and Robert C. Cain. Resources for Article: Further resources on this subject: Getting Started with Pentaho Data Integration [article] Where Is My Data and How Do I Get to It? [article] Configuring and Managing the Mailbox Server Role [article]


Event detection from the news headlines in Hadoop

Packt
08 Dec 2016
13 min read
In this article by Anurag Shrivastava, author of Hadoop Blueprints, we will learn how to build a text analytics system which detects specific events in random news headlines.

The Internet has become the main source of news in the world. There are thousands of websites which constantly publish and update news stories around the world. Not every news item is relevant for everyone, but some news items are very critical for some people or businesses. For example, if you were a major car manufacturer based in Germany with your suppliers located in India, then you would be interested in the news from that region which can affect your supply chain.

(For more resources related to this topic, see here.)

Road accidents in India are a major social and economic problem. Road accidents leave a large number of fatalities behind and result in the loss of capital. In this example, we will build a system which detects if a news item refers to a road accident event. Let us define what we mean by that in the next paragraph.

A road accident event may or may not result in fatal injuries. One or more vehicles and pedestrians may be involved in the accidents. A non road accident event news item is everything else which cannot be categorized as a road accident event. It could be a trend analysis related to road accidents or something totally unrelated.

Technology stack

To build this system, we will use the following technologies:

Task            | Technology
Data storage    | HDFS
Data processing | Hadoop MapReduce
Query engine    | Hive and Hive UDF
Data ingestion  | Curl and HDFS copy
Event detection | OpenNLP

The event detection system is a machine learning based natural language processing system. The natural language processing system brings the intelligence to detect the events in the random headline sentences from the news items.

OpenNLP

The Open Source Natural Language Processing Framework (OpenNLP) is from the Apache Software Foundation. You can download version 1.6.0 from https://opennlp.apache.org/ to run the examples in this blog. It is capable of detecting entities, document categories, parts of speech, and so on in text written by humans. We will use the document categorization feature of OpenNLP in our system. The document categorization feature requires you to train the OpenNLP model with the help of sample text. As a result of training, we get a model. This resulting model is used to categorize new text.

Our training data looks as follows:

r 1.46 lakh lives lost on Indian roads last year - The Hindu.
r Indian road accident data | OpenGovernmentData (OGD) platform...
r 400 people die everyday in road accidents in India: Report - India TV.
n Top Indian female biker dies in road accident during country-wide tour.
n Thirty die in road accidents in north India mountains—World—Dunya...
n India's top woman biker Veenu Paliwal dies in road accident: India...
r Accidents on India's deadly roads cost the economy over $8 billion...
n Thirty die in road accidents in north India mountains (The Express)

The first column can take two values:
- n indicates that the news item is a road accident event
- r indicates that the news item is not a road accident event, that is, everything else

This training set has a total of 200 lines. Please note that OpenNLP requires at least 15000 lines in the training set to deliver good results. Because we do not have so much training data, we will start with a small set but remain aware of the limitations of our model.
You will see that even with a small training dataset, this model works reasonably well. Let us train and build our model:

$ opennlp DoccatTrainer -model en-doccat.bin -lang en -data roadaccident.train.prn -encoding UTF-8

Here the file roadaccident.train.prn contains the training data. The output file en-doccat.bin contains the model which we will use in our data pipeline. We have built our model using the command line utility, but it is also possible to build the model programmatically. The training data file is a plain text file, which you can expand with a bigger corpus of knowledge to make the model smarter.

Next we will build the data pipeline as follows:

Fetch RSS feeds

This component will fetch RSS news feeds from popular news websites. In this case, we will just use one news feed from Google. We can always add more sites after our first RSS feed has been integrated. The whole RSS feed can be downloaded using the following command:

$ curl "https://news.google.com/news?cf=all&hl=en&ned=in&topic=n&output=rss"

The previous command downloads the news headlines for India. You can customize the RSS feed for your region by visiting the Google News site at https://news.google.com.

Scheduler

Our scheduler will fetch the RSS feed once every 6 hours. Let us assume that in a 6-hour interval, we have a good likelihood of fetching fresh news items. We will wrap our feed fetching script in a shell file and invoke it using cron. The script is as follows:

$ cat feedfetch.sh
NAME="newsfeed-"`date +%Y-%m-%dT%H.%M.%S`
curl "https://news.google.com/news?cf=all&hl=en&ned=in&topic=n&output=rss" > $NAME
hadoop fs -put $NAME /xml/rss/newsfeeds

The cron job setup line will be as follows:

0 */6 * * * /home/hduser/mycommand

Please edit your cron table using the following command and add the setup line to it:

$ crontab -e

Loading data in HDFS

To load data in HDFS, we will use the HDFS put command, which copies the downloaded RSS feed into a directory in HDFS. Let us make the directory in HDFS where our feed fetcher script will store the RSS feeds:

$ hadoop fs -mkdir /xml/rss/newsfeeds

Query using Hive

First we will create an external table in Hive for the new RSS feed. Using XPath-based select queries, we will extract the news headlines from the RSS feeds.
These headlines will be passed to UDF to detect the categories: CREATE EXTERNAL TABLE IF NOT EXISTS rssnews( document STRING) COMMENT 'RSS Feeds from media' STORED AS TEXTFILE location '/xml/rss/newsfeeds'; The following command parses the XML to retrieve the title or the headlines from XML and explodes them in a single column table: SELECT explode(xpath(name, '//item/title/text()')) FROM xmlnews1; The sample output of the above command on my system is as follows: hive> select explode(xpath(document, '//item/title/text()')) from rssnews; Query ID = hduser_20161010134407_dcbcfd1c-53ac-4c87-976e-275a61ac3e8d Total jobs = 1 Launching Job 1 out of 1 Number of reduce tasks is set to 0 since there's no reduce operator Starting Job = job_1475744961620_0016, Tracking URL = http://localhost:8088/proxy/application_1475744961620_0016/ Kill Command = /home/hduser/hadoop-2.7.1/bin/hadoop job -kill job_1475744961620_0016 Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0 2016-10-10 14:46:14,022 Stage-1 map = 0%, reduce = 0% 2016-10-10 14:46:20,464 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.69 sec MapReduce Total cumulative CPU time: 4 seconds 690 msec Ended Job = job_1475744961620_0016 MapReduce Jobs Launched: Stage-Stage-1: Map: 1 Cumulative CPU: 4.69 sec HDFS Read: 120671 HDFS Write: 1713 SUCCESS Total MapReduce CPU Time Spent: 4 seconds 690 msec OK China dispels hopes of early breakthrough on NSG, sticks to its guns on Azhar - The Hindu Pampore attack: Militants holed up inside govt building; combing operations intensify - Firstpost CPI(M) worker hacked to death in Kannur - The Hindu Akhilesh Yadav's comment on PM Modi's Lucknow visit shows Samajwadi Party's insecurity: BJP - The Indian Express PMO maintains no data about petitions personally read by PM - Daily News & Analysis AIADMK launches social media campaign to put an end to rumours regarding Amma's health - Times of India Pakistan, India using us to play politics: Former Baloch CM - Times of India Indian soldier, who recited patriotic poem against Pakistan, gets death threat - Zee News This Dussehra effigies of 'terrorism' to go up in flames - Business Standard 'Personal reasons behind Rohith's suicide': Read commission's report - Hindustan Times Time taken: 5.56 seconds, Fetched: 10 row(s) Hive UDF Our Hive User Defined Function (UDF) categorizeDoc takes a news headline and suggests if it is a news about a road accident or the road accident event as we explained earlier. 
This function is as follows:

package com.mycompany.app;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;
import opennlp.tools.util.InvalidFormatException;
import opennlp.tools.doccat.DoccatModel;
import opennlp.tools.doccat.DocumentCategorizerME;
import java.lang.String;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.IOException;

@Description(
    name = "getCategory",
    value = "_FUNC_(string) - gets the category of a document ")
public final class MyUDF extends UDF {

    public Text evaluate(Text input) {
        if (input == null) return null;
        try {
            return new Text(categorizeDoc(input.toString()));
        } catch (Exception ex) {
            ex.printStackTrace();
            return new Text("Sorry Failed: >> " + input.toString());
        }
    }

    public String categorizeDoc(String doc) throws InvalidFormatException, IOException {
        InputStream is = new FileInputStream("./en-doccat.bin");
        DoccatModel model = new DoccatModel(is);
        is.close();
        DocumentCategorizerME classificationME = new DocumentCategorizerME(model);
        String documentContent = doc;
        double[] classDistribution = classificationME.categorize(documentContent);
        String predictedCategory = classificationME.getBestCategory(classDistribution);
        return predictedCategory;
    }
}

The function categorizeDoc takes a single string as input. It loads the model, which we created earlier, from the file en-doccat.bin in the local directory. Finally, it calls the classifier, which returns the result to the calling function. The calling function MyUDF extends the Hive UDF class. It calls the function categorizeDoc for each input string. If it succeeds, the value is returned to the calling program; otherwise, a message is returned indicating that category detection has failed.
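One practical note on the code above: it reopens en-doccat.bin and rebuilds the DoccatModel on every call, which is wasteful when a single query categorizes many headlines. A hedged variation (not from the book) is to load the model lazily once per JVM and reuse it:

// Sketch: cache the model in a static field inside MyUDF so it is loaded only once
private static DoccatModel cachedModel;

private static synchronized DoccatModel getModel() throws IOException {
    if (cachedModel == null) {
        InputStream is = new FileInputStream("./en-doccat.bin");
        try {
            cachedModel = new DoccatModel(is);
        } finally {
            is.close();
        }
    }
    return cachedModel;
}

public String categorizeDoc(String doc) throws IOException {
    // Reuse the cached model instead of reloading it for every headline
    DocumentCategorizerME classifier = new DocumentCategorizerME(getModel());
    double[] classDistribution = classifier.categorize(doc);
    return classifier.getBestCategory(classDistribution);
}

The rest of the class stays the same; only the model loading moves out of the per-row path.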
The pom.xml file to build the above file is as follows: $ cat pom.xml <?xml version="1.0" encoding="UTF-8"?> <project xsi_schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mycompany</groupId> <artifactId>app</artifactId> <version>1.0</version> <packaging>jar</packaging> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>1.7</maven.compiler.source> <maven.compiler.target>1.7</maven.compiler.target> </properties> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-client</artifactId> <version>2.7.1</version> <type>jar</type> </dependency> <dependency> <groupId>org.apache.hive</groupId> <artifactId>hive-exec</artifactId> <version>2.0.0</version> <type>jar</type> </dependency> <dependency> <groupId>org.apache.opennlp</groupId> <artifactId>opennlp-tools</artifactId> <version>1.6.0</version> </dependency> </dependencies> <build> <pluginManagement> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.8</version> </plugin> <plugin> <artifactId>maven-assembly-plugin</artifactId> <configuration> <archive> <manifest> <mainClass>com.mycompany.app.App</mainClass> </manifest> </archive> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> </configuration> </plugin> </plugins> </pluginManagement> </build> </project> You can build the jar with all the dependencies in it using the following commands: $ mvn clean compile assembly:single The resulting jar file app-1.0-jar-with-dependencies.jar can be found in the target directory. Let us use this jar file in Hive to categorise the news headlines as follows: Copy jar file to the bin subdirectory in the Hive root: $ cp app-1.0-jar-with-dependencies.jar $HIVE_ROOT/bin Copy the trained model in the bin sub directory in the Hive root: $ cp en-doccat.bin $HIVE_ROOT/bin Run the categorization queries Run Hive: $hive Add jar file in Hive: hive> ADD JAR ./app-1.0-jar-with-dependencies.jar ; Create a temporary categorization function catDoc: hive> CREATE TEMPORARY FUNCTION catDoc as 'com.mycompany.app.MyUDF'; Create a table headlines to hold the headlines extracted from the RSS feed: hive> create table headlines( headline string); Insert the extracted headlines in the table headlines: hive> insert overwrite table headlines select explode(xpath(document, '//item/title/text()')) from rssnews; Let's test our UDF by manually passing a real news headline to it from a newspaper website: hive> hive> select catDoc("8 die as SUV falls into river while crossing bridge in Ghazipur") ; OK N The output is N which means this is indeed a headline about a road accident incident. 
This is reasonably good, so now let us run the function for all the headlines:

hive> select headline, catDoc(*) from headlines;
OK
China dispels hopes of early breakthrough on NSG, sticks to its guns on Azhar - The Hindu    r
Pampore attack: Militants holed up inside govt building; combing operations intensify - Firstpost    r
Akhilesh Yadav Backs Rahul Gandhi's 'Dalali' Remark - NDTV    r
PMO maintains no data about petitions personally read by PM Narendra Modi - Economic Times    n
Mobile Internet Services Suspended In Protest-Hit Nashik - NDTV    n
Pakistan, India using us to play politics: Former Baloch CM - Times of India    r
CBI arrests Central Excise superintendent for taking bribe - Economic Times    n
Be extra vigilant during festivals: Centre's advisory to states - Times of India    r
CPI-M worker killed in Kerala - Business Standard    n
Burqa-clad VHP activist thrashed for sneaking into Muslim women gathering - The Hindu    r
Time taken: 0.121 seconds, Fetched: 10 row(s)

You can see that our headline detection function works and outputs r or n for each headline. In the above example, we see many false positives, where a headline has been incorrectly identified as a road accident. Better training of our model can improve the quality of the results.

Further reading
The book Hadoop Blueprints covers several case studies where we can apply Hadoop, HDFS, data ingestion tools such as Flume and Sqoop, query and visualization tools such as Hive and Zeppelin, and machine learning tools such as BigML and Spark to build solutions. You will discover how to build a fraud detection system using Hadoop or build a Data Lake, for example.

Summary
In this article, we learned to build a text analytics system that detects specific events in random news headlines. Along the way, we also saw how to apply Hadoop, HDFS, Hive, and the other tools involved.

Resources for Article:
Further resources on this subject:
Spark for Beginners [article]
Hive Security [article]
Customizing heat maps (Intermediate) [article]

A simple content pipeline with Make

Ryan Roden-Corrent
08 Dec 2016
5 min read
Many game engines have the concept of a content pipeline. Your project includes a collection of assets like images, sounds, and music. These may be stored in one format that you use for development but translated into another format that gets packaged along with the game. The content pipeline is responsible for this translation. If you are developing a small game, the overhead of a large-scale game engine may be more than what you want to deal with. For those who prefer a minimalistic work environment, a relatively simple Makefile can serve as your content pipeline. It took a few game projects for me to set up a pipeline that I was happy with, and looking back, I really wish I had a post like this to get me started. I'm hoping this will be specific enough to get you started but generic enough to be adaptable to your needs!

The setup

Suppose you are making a game and you use Aseprite to create pixel art and LMMS to compose music; your game's file structure would look like this:

- src/
  - ... source code ...
- content/
  - song1.mmpz
  - song2.mmpz
  - ...
  - image1.ase
  - image2.ase
  - ...
- bin/
  - game
  - song1.ogg
  - song2.ogg
  - ...
  - image1.png
  - image2.png
  - ...

src contains your source code—the language is irrelevant for this discussion. content contains the work-in-progress art and music for your game. They are saved in the source formats for Aseprite and LMMS (.ase and .mmpz, respectively). The bin folder represents the actual game "package"—the thing you would distribute to those who want to play your game. bin/game represents the executable built from the source files. bin/ also contains playable .ogg files that are exported from the corresponding .mmpz files. Similarly, bin contains .png files that are built from the corresponding .ase files. We want to automate the process of exporting the content files into their game-ready format.

The Makefile

I'll start by showing the example Makefile and then explain how it works:

CONTENT_DIR = content
BIN_DIR = bin

IMAGE_FILES := $(wildcard $(CONTENT_DIR)/*.ase)
MUSIC_FILES := $(wildcard $(CONTENT_DIR)/*.mmpz)

all: code music art

code: bin_dir
	# build code here ...

bin_dir:
	@mkdir -p $(BIN_DIR)

art: bin_dir $(IMAGE_FILES:$(CONTENT_DIR)/%.ase=$(BIN_DIR)/%.png)

$(BIN_DIR)/%.png : $(CONTENT_DIR)/%.ase
	@echo building image $*
	@aseprite --batch --sheet $(BIN_DIR)/$*.png $(CONTENT_DIR)/$*.ase --data /dev/null

music: bin_dir $(MUSIC_FILES:$(CONTENT_DIR)/%.mmpz=$(BIN_DIR)/%.ogg)

$(BIN_DIR)/%.ogg : $(CONTENT_DIR)/%.mmpz
	@echo building song $*
	lmms -r $(CONTENT_DIR)/$*.mmpz -f ogg -b 64 -o $(BIN_DIR)/$*.ogg

clean:
	$(RM) -r $(BIN_DIR)

The first rule (all) will be run when you just type make. This depends on code, music, and art. I won't get into the specifics of code, as that will differ depending on the language you use. Whatever the code is, it should build your source code into an executable that gets placed in the bin directory. You can see that code, art, and music all depend on bin_dir, which ensures that the bin folder exists before we try to build anything. Let's take a look at how the art rule works. At the top of the file, we define IMAGE_FILES := $(wildcard $(CONTENT_DIR)/*.ase). This uses a wildcard search to collect the names of all the .ase files in our content directory. The expression $(IMAGE_FILES:$(CONTENT_DIR)/%.ase=$(BIN_DIR)/%.png) says that for every .ase file in the content directory, we want a corresponding .png file in bin.
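To make that substitution concrete: if content/ contained just image1.ase and image2.ase (hypothetical file names matching the example layout above), the art rule would expand to something like:

art: bin_dir bin/image1.png bin/image2.png

In other words, asking for art asks for every corresponding .png, and each individual .png is then produced by the pattern rule described next.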
The next rule provides the recipe for building a single .png from a single .ase:

$(BIN_DIR)/%.png : $(CONTENT_DIR)/%.ase
	@echo building image $*
	@aseprite --batch --sheet $(BIN_DIR)/$*.png $(CONTENT_DIR)/$*.ase --data /dev/null

That is, for every png file we want in bin, we need to find a matching ase file in content and invoke the given aseprite command on it. The music rule works pretty much the same way, but for .mmpz and .ogg files instead. Now you can run make music to build music files, make art to build art files, or just make to build everything. As all the resulting content ends up in bin, the clean rule just removes the bin directory.

Advantages

You don't have to remember to export content every time you work on it. Without a system like this, you would typically have to save whatever you are working on to a source file (e.g. a .mmpz file for LMMS) and export it to the output format (e.g. .ogg). This is tedious, and the second part is easy to forget.

If you are using a version control system (and you should be!), it doesn't have to track the bin directory, as it can be generated just by running make. For git, this means you can put bin/ in .gitignore, which is a huge advantage as git doesn't handle large binary files well.

It is relatively easy to create a distributable package for your game. Just run make and compress the bin directory.

Summary

I hope this illuminated how a process that is typically wrapped up in the complexity of a large-scale game engine can be made quite simple. While I used LMMS and Aseprite as specific examples, this method can be easily adapted to any content-creation programs that have a command-line tool you can use to export files.

About the author

Ryan Roden-Corrent is a software developer by trade and hobby. He is an active contributor in the free/open source software community and has a passion for simple but effective tools. He started gaming at a young age and dabbles in all aspects of game development, from coding to art and music. He's also an aspiring musician and yoga teacher. You can find his open source work and Creative Commons art online.

Gathering and analyzing stock market data with R Part 1 of 2

Erik Kappelman
07 Dec 2016
6 min read
This two-part blog series walks through a set of R scripts used to collect and analyze data from the New York Stock Exchange. Collecting data in real time from the stock market can be valuable in at least two ways. First, historical intraday trading data is valuable. There are many companies you can find around the Web that sell historical intraday trading data. This data can be used to make quick investment decisions. Investment strategies like day trading and short selling rely on being able to ride waves in the stock market that might only last a few hours or minutes. So, if a person could collect daily trading data for long enough, this data would eventually become valuable and could be sold. While almost any programming language can be used to collect data from the Internet, using R to collect stock market data is somewhat more convenient if R will also be used to analyze the data and make predictions with it. Additionally, I find R to be an intuitive scripting language that can be used for a wide range of solutions. I will first discuss how to create a script that can collect intraday trading data. I will then discuss using R to collect historical daily trading data. I will also discuss analyzing this data and making predictions from it. There is a lot of ground to cover, so this post is split into two parts. All of the code and accompanying files can be found in this repository. So, let's get started. If you don't have the R binaries installed, go ahead and get them, as they are a must for following along. Additionally, I would highly recommend using RStudio for development projects centered around R. Although RStudio certainly has its flaws, in my opinion, it is the best choice.

library(httr)
library(jsonlite)
source('DataFetch.R')

The above three lines load the required packages and source the file containing the functions that actually collect the data. Libraries are a common feature in R. Before you try to do something too complex, make sure that you check whether there is an existing library that already performs the operation. The R community is extensive and thriving, which makes using R for development that much better.

Sys.sleep(55*60)
frame.list <- list()
ticker <- function(rest.time){
  ptm <- proc.time()
  df <- data.frame(get.data(), Date = date())
  timer.time <- proc.time() - ptm
  Sys.sleep(as.numeric(rest.time - timer.time[3]))
  return(list(df))
}

The next lines of code stop the system until it is time for the stock market to open. I start this script before I go to work in the morning, so 55*60 is about how many seconds pass between when I leave for work and the market opens. We then initialize an empty list. If you are new to R, you will notice the use of an arrow instead of an equals sign. Although the equals sign does work, many people, including me, use the arrow. This list is going to hold the dataframes containing the stock data that is created throughout the day. We then define the ticker function, which is used to repeatedly call the set of functions that retrieve the data and then return the data in the form of a dataframe.

for(i in 1:80){
  frame.list <- c(suppressWarnings(ticker(5*30)), frame.list)
}
save(frame.list, file = "RealTimeData.rda")

The ticker function takes the number of seconds to wait between queries to the market as its only argument. This number is modified based on the length of time the query takes. This ensures that the timing of the data points is consistent.
The ticker function is called eighty times at regular intervals. The results are appended to the list of dataframes, and after the for loop completes, the data is saved in the R format. Now let's look into the functions that fetch the data, located in DataFetch.R. R code can become pretty verbose, so it is good to get in the habit of segmenting your code into multiple files. The functions used to fetch data are displayed below. We will start by discussing the parse.data function because it is the workhorse; the get.data function is more of a controller.

parse.data <- function(symbols, range){
  base.URL <- "http://finance.google.com/finance/info?client=ig&q="
  start <- min(range)
  end <- max(range)
  symbol.string <- paste0("NYSE:", symbols[start], ",")
  for(i in (start+1):end){
    temp <- paste0("NYSE:", symbols[i], ",")
    symbol.string <- paste(symbol.string, temp, sep="")
  }
  URL <- paste(base.URL, symbol.string, sep="")
  data <- GET(URL)
  now <- date()
  bin <- content(data, "raw")
  writeBin(bin, "data.txt")
  conn <- file("data.txt", open="r")
  linn <- readLines(conn)
  jstring <- "["
  for(i in 3:length(linn)){
    jstring <- paste0(jstring, linn[i])
  }
  close(conn)
  file.remove("data.txt")
  obj <- fromJSON(jstring)
  return(data.frame(Symbol=obj$t, Price=as.numeric(obj$l)))
}

The first function takes a list of stock symbols and the list indices of the symbols that are to be queried. The function then builds a string in the proper format to query Google Finance for the latest price information on the chosen symbols. The query is performed using the httr R package, a package used to perform HTTP tasks. The response from the web request is shuttled through a few formats in order to get the data into an easy-to-use form. The function then returns a dataframe containing the symbols and prices.

get.data <- function(){
  syms <- read.csv("NYSE.txt", header = 2, sep = "\t")
  sb <- grep("[A-Z]{4}|[A-Z]{3}", syms$Symbol, perl = F, value = T)
  result <- c()
  in.list <- list()
  list.seq <- seq(1, 2901, 100)
  for(i in 1:(length(list.seq)-1)){
    range <- list.seq[i]:list.seq[i+1]
    result <- rbind(result, parse.data(sb, range))
  }
  return(droplevels.data.frame(na.omit(result)))
}

The get.data function above is called by the ticker function. It serves as a controller for the parse.data function by requesting the prices in chunks so that the queries stay small enough. It also reads the symbol list in from the "NYSE.txt" file, which is a simple list of stocks on the New York Stock Exchange and their symbols. The symbols are then put through a regex routine that eliminates symbols that do not follow the right format for Google Finance. Gathering intraday data from the stock market using R, or any language, is obviously somewhat of a pain; however, if properly executed, the results can be quite useful and valuable. I hope you read part two of this blog series, where we use R to gather and analyze historical stock market data.

About the author

Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.

An Architect’s Critical Competencies

Packt
07 Dec 2016
6 min read
In this article, Sameer Paradkar, the author of the book Cracking the IT Architect Interview, consolidates key interview topics into a single reference guide that will save time prior to interviews and can serve as a ready reference for important topics that need to be revised before interviews.

(For more resources related to this topic, see here.)

A good architect is one who leads by example, and without a good understanding of the technology stack and business domain, an architect is not equipped to deliver the prerequisite outcomes for the enterprise. The team members typically have deep-dive expertise in specific technology areas but will lack confidence in the architect if he does not have competencies in the domain or technology. The architect is the bridge between technology and the business team, and hence he/she must understand all aspects of the technology stack to be able to liaise with the business. The architect must be conversant in the business domain in order to drive the team and all the stakeholders toward a common organizational goal. An architect might not be busy all the time, but he/she leverages decades of expertise to shape and monitor the organizational IT landscape, making quick decisions during various stages of the SDLC. The project manager handles the people management aspects, freeing the architect from the hassles of operational tasks. An excellent architect is pretty much a hands-on person and should be able to mentor members of the design and implementation teams. He should be knowledgeable and competent enough to handle any complex situation.

An architect's success in interviews does not come easily. One has to spend hours prior to each interview, wading through various books and references for preparation. The motivation for this book was to consolidate all this information into a single reference guide that will save time prior to interviews and can be a ready reference for important topics that need to be revised before the interviews.

Leadership: The architect has to make decisions and take ownership, and a lot of the time, the right choice is not simple. The architect needs to find a solution that works; it may not always be the best alternative on technical merits, but it should work best in the given situation. To take such decisions, the architect must have an excellent understanding of the cultural and political environment within the organization and should have the ability to generate buy-in from the key stakeholders.

Strategic Mindset: This is the ability of an architect to look at things from a 10,000-foot elevation, at a strategic level, isolating the operational nuances. This requires creating an organizational vision, such as making the product a market leader, and then dividing it into achievable objectives to make it simpler for all the stakeholders to achieve these results. Architects are often tasked with finding the alternative solution that provides the best ROI to the organization and creating a business case for getting sponsorship. Architects often work with top-level executives such as the CEO, CTO, and CIO, where it is necessary to create and present strategic architectures and roadmaps for the organization.

Domain Knowledge: It is critical to understand the problem domain before creating and defining a solution. It is also mandatory to be knowledgeable about domain-specific requirements, such as legal and regulatory requirements.
A sound domain understanding is not only essential for understanding the requirements and evangelizing the target state but also helps in articulating the right decisions. The architect must be able to speak the business vocabulary and draw on experiences from the domain to be able to have meaningful discussions with the business stakeholders.

Technical Acumen: This is a key competency, as architects are hired for their technical expertise and acumen. The architect should have a breadth of expertise in technologies and platforms to understand their strengths and weaknesses and make the right decisions. Even for technical architect roles, it is mandatory to have skills in multiple technology stacks and frameworks and to be knowledgeable about technology trends.

Architects' growth paths

The software architecture discipline has matured since its inception. The architecting practice is no longer reserved for veteran practitioners. The core concepts and principles of this discipline can now be acquired through training programs, books, and college curricula. The discipline is turning from an art into a competency accessible through training and experience. A significant number of methodologies, frameworks, and processes have been developed to support various perspectives of the architecture practice.

A software architect is responsible for creating the most appropriate architecture for the enterprise or system to suit the business goals, fulfill user requirements, and achieve the desired business outcome. A software architect's career starts with a rigorous education in computer science. An architect is responsible for making the hardest decisions on software architecture and design, and hence must have a sound understanding of the concepts, patterns, and principles, independent of any programming language. A number of architect flavors exist: enterprise architect, business architect, business strategy architect, solution architect, infrastructure architect, security architect, integration architect, technical architect, systems architect, and software designer. There are other variations as well, but this section describes the previously mentioned flavors in more detail. Finally, for an architect, learning must never stop. Continuous participation in the communities and learning about new technologies, methodologies, and frameworks are mandatory for value creation and to stay ahead of the demand curve. The different architect roles can be positioned along a spectrum of breadth of knowledge versus depth of specialization.

Summary

Individual passion is the primary driving factor that determines the growth path of an architect. For instance, a security architect who is passionate about the domain of IT security and has developed an immensely valuable body of knowledge over time should ideally not be coerced into a shift to a solution architect role and eventually a governance role.

Resources for Article:
Further resources on this subject:
Opening up to OpenID with Spring Security [article]
Thinking Functionally [article]
Setting up Development Environment for Android Wear Applications [article]

Build a Chatbot

Packt
07 Dec 2016
23 min read
In this article written by Alexander T. Combs, author of the book Python Machine Learning Blueprints, we are going to learn how to construct a chatbot from scratch. Along the way, we'll learn more about the history of the field and its future prospects. We'll cover the following topics in this article:

The Turing test
The history of chatbots
The design of chatbots
Building a chatbot

(For more resources related to this topic, see here.)

The Turing test

In 1950, Alan Turing posed the question, "Can machines think?" in one of his more famous papers. Being the practical genius that he was, he decided not to tackle this question head on but to pose it in the framework of the "problem of other minds" instead. This problem poses the question, "How do we truly know that other people have minds like our own?". As we can only observe their behavior and not the inner workings of their mind, we must take it on faith that they are like us. Turing proposed that if a machine behaved as if it was intelligent, then in fact, we should view it as such. This, in a nutshell, is the Turing test. Passing this test means convincing humans that a machine is a fellow human. You may wonder at this point whether a program ever passed the Turing test successfully. The answer is that there really is no official Turing test. The closest thing to an official event is the Loebner Prize. This is a contest held annually that awards modest prizes to chatbots that appear to be the most convincingly human to a panel of judges. If this sounds rigorous and academic, it has been described as anything but. Loebner's sponsorship for the contest is said to have been a product of his total disdain for labor of any kind. Hopefully, by this point, you have realized that any claims about "passing the Turing test" are more spectacle than actual science. However, despite this, Turing's original concept is a useful construct.

The history of chatbots

I still remember the day my father sat me down in front of his IBM XT with its black and green flickering monitor and asked me to talk to "it". The conversation went something like the following:

>Hello? How are you feeling today?
>Hi. Who is this?
>Hi there… how are you today?
>I'm fine. You didn't tell me who you are.
>We should be discussing you, not me.
>We should? Why?
>Let's change focus a bit… Tell me about your family.
>You are impossible to have a conversation with.
>Can you elaborate on that?
>You are killing me.
>Perhaps you would like me to be killing you.
>Ok, you are freaking me out.
>Perhaps you're really talking about yourself?
>What?? That makes no sense.
>How would an answer to that help you?
>END
>Very Interesting.
>QUIT
>Thank you for talking with me.
>#$###$#@@….

The "it" that I talked to was a program called ELIZA. Written in the mid-1960s at MIT's Artificial Intelligence Lab by Joseph Weizenbaum, ELIZA was created to mimic the responses of a Rogerian psychotherapist. Though nearly comical when examined in any depth, the program was capable of convincing some users that they were chatting with an actual human. This was a remarkable feat considering it was a scant 200 lines of code that used randomization and regular expressions to parrot back responses. Even today, this simple program remains a staple of popular culture. If you ask Siri who ELIZA is, she will tell you she is a friend and a brilliant psychiatrist. If ELIZA was an early example of chatbots, what have we seen since then?
In recent years, there has been an explosion of new chatbots; the most notable of these is Cleverbot. Cleverbot was released to the world via the web in 1997. Since then, this bot has racked up hundreds of millions of conversations. Unlike early chatbots, Cleverbot (as the name suggests) appears to become more intelligent with each conversation. Though the exact details of the workings of the algorithm are difficult to find, it is said to work by recording all conversations in a database and finding the most appropriate response by identifying the most similar questions and responses in the database. I made up a nonsensical question in the following screenshot, and you can see that it found something similar to the object of my question in terms of a string match. I persisted: Again I got something…similar? You'll also notice that topics can persist across the conversation. In response to my answer, I was asked to go into more detail and justify my answer. This is one of the things that appears to make Cleverbot, well, clever.

While chatbots that learn from humans can be quite amusing, they can also have a darker side. Just this past year, Microsoft released a chatbot named Tay on Twitter. People were invited to ask questions of Tay, and Tay would respond in accordance with her "personality". Microsoft had apparently programmed the bot to appear to be a 19-year-old American girl. She was intended to be your virtual "bestie"; the only problem was she started sounding like she would rather hang with the Nazi youth than you. As a result of these unbelievably inflammatory tweets, Microsoft was forced to pull Tay off Twitter and issue an apology:

"As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."
-March 25, 2016 Official Microsoft Blog

Clearly, brands that want to release chatbots into the wild in the future should take a lesson from this debacle. There is no doubt that brands are embracing chatbots. Everyone from Facebook to Taco Bell is getting in on the game. Witness the TacoBot: Yes, this is a real thing, and despite stumbles such as Tay, there is a good chance the future of UI looks a lot like TacoBot. One last example might even help explain why. Quartz recently launched an app that turns news into a conversation. Rather than lay out the day's stories as a flat list, you are engaged in a chat as if you were getting news from a friend. David Gasca, a PM at Twitter, describes his experience using the app in a post on Medium. He describes how the conversational nature invoked feelings that were normally only triggered in human relationships. This is his take on how he felt when he encountered an ad in the app: "Unlike a simple display ad, in a conversational relationship with my app, I feel like I owe something to it: I want to click. At the most subconscious level, I feel the need to reciprocate and not let the app down: The app has given me this content. It's been very nice so far and I enjoyed the GIFs.
I should probably click since it's asking nicely."

If this experience is universal—and I expect that it is—this could be the next big thing in advertising, and have no doubt that advertising profits will drive UI design:

"The more the bot acts like a human, the more it will be treated like a human."
-Mat Webb, technologist and co-author of Mind Hacks

At this point, you are probably dying to know how these things work, so let's get on with it!

The design of chatbots

The original ELIZA application was two-hundred odd lines of code. The Python NLTK implementation is similarly short. An excerpt can be seen at the following link from NLTK's website (http://www.nltk.org/_modules/nltk/chat/eliza.html). I have also reproduced an excerpt below:

# Natural Language Toolkit: Eliza
#
# Copyright (C) 2001-2016 NLTK Project
# Authors: Steven Bird <stevenbird1@gmail.com>
#          Edward Loper <edloper@gmail.com>
# URL: <http://nltk.org/>
# For license information, see LICENSE.TXT
# Based on an Eliza implementation by Joe Strout <joe@strout.net>,
# Jeff Epler <jepler@inetnebr.com> and Jez Higgins <mailto:jez@jezuk.co.uk>.

# a translation table used to convert things you say into things the
# computer says back, e.g. "I am" --> "you are"

from __future__ import print_function

# a table of response pairs, where each pair consists of a
# regular expression, and a list of possible responses,
# with group-macros labelled as %1, %2.

pairs = (
    (r'I need (.*)',
     ("Why do you need %1?",
      "Would it really help you to get %1?",
      "Are you sure you need %1?")),
    (r'Why don\'t you (.*)',
     ("Do you really think I don't %1?",
      "Perhaps eventually I will %1.",
      "Do you really want me to %1?")),
    [snip]
    (r'(.*)\?',
     ("Why do you ask that?",
      "Please consider whether you can answer your own question.",
      "Perhaps the answer lies within yourself?",
      "Why don't you tell me?")),
    (r'quit',
     ("Thank you for talking with me.",
      "Good-bye.",
      "Thank you, that will be $150. Have a good day!")),
    (r'(.*)',
     ("Please tell me more.",
      "Let's change focus a bit... Tell me about your family.",
      "Can you elaborate on that?",
      "Why do you say that %1?",
      "I see.",
      "Very interesting.",
      "%1.",
      "I see. And what does that tell you?",
      "How does that make you feel?",
      "How do you feel when you say that?"))
)

eliza_chatbot = Chat(pairs, reflections)

def eliza_chat():
    print("Therapist\n---------")
    print("Talk to the program by typing in plain English, using normal upper-")
    print('and lower-case letters and punctuation. Enter "quit" when done.')
    print('='*72)
    print("Hello. How are you feeling today?")
    eliza_chatbot.converse()

def demo():
    eliza_chat()

if __name__ == "__main__":
    demo()

As you can see from this code, input text was parsed and then matched against a series of regular expressions. Once the input was matched, a randomized response (that sometimes echoed back a portion of the input) was returned. So, something such as I need a taco would trigger a response of Would it really help you to get a taco? Obviously, the answer is yes, and fortunately, we have advanced to the point that technology can provide one to you (bless you, TacoBot), but this was still in the early days. Shockingly, some people did actually believe ELIZA was a real human. However, what about more advanced bots? How are they constructed? Surprisingly, most of the chatbots that you're likely to encounter don't even use machine learning; they use what's known as retrieval-based models. This means responses are predefined according to the question and the context.
The most common architecture for these bots is something called Artificial Intelligence Markup Language (AIML). AIML is an XML-based schema to represent how the bot should react to the user's input. It's really just a more advanced version of how ELIZA works. Let's take a look at how responses are generated using AIML. First, all inputs are preprocessed to normalize them. This means when you input "Waaazzup???", it is mapped to "WHAT IS UP". This preprocessing step funnels down the myriad ways of saying the same thing into one input that can run against a single rule. Punctuation and other extraneous inputs are removed as well at this point. Once this is complete, the input is matched against the appropriate rule. The following is a sample template:

<category>
  <pattern>WHAT IS UP</pattern>
  <template>The sky, duh. Pfft. Humans...</template>
</category>

This is the basic setup, but you can also layer in wildcards, randomization, and prioritization schemes. For example, the following pattern uses wildcard matching:

<category>
  <pattern>* FOR ME</pattern>
  <template>I'm a bot. I don't <star/>. Ever.</template>
</category>

Here, the * wildcard matches one or more words before FOR ME and then repeats these back in the output template. If the user were to type in Dance for me!, the response would be, I'm a bot. I don't dance. Ever. As you can see, these rules don't make for anything that approximates any type of real intelligence, but there are a few tricks that strengthen the illusion. One of the better ones is the ability to generate responses conditioned on a topic. For example, here is a rule that invokes a topic:

<category>
  <pattern>I LIKE TURTLES</pattern>
  <template>I feel like this whole <set name="topic">turtle</set> thing could be a problem. What do you like about them?</template>
</category>

Once the topic is set, then the rules specific to that context can be matched:

<topic name="turtles">
  <category>
    <pattern>* SHELL IS *</pattern>
    <template>I dislike turtles primarily because of their shells. What other creepy things do you like about turtles?</template>
  </category>
  <category>
    <pattern>* HIDE *</pattern>
    <template>I wish like a turtle that I could hide from this conversation.</template>
  </category>
</topic>

Let's see what this interaction looks like:

>I like turtles!
>I feel like this whole turtle thing could be a problem. What do you like about them?
>I like how they hide in their shell
>I wish like a turtle I could hide from this conversation.

You can see that the continuity across the conversation adds a measure of realism. You probably think that this can't be state-of-the-art in this age of deep learning, and you're right. While most bots are rule-based, the next generation of chatbots is emerging, and they are based on neural networks. In 2015, Oriol Vinyals and Quoc Le of Google published a paper (http://arxiv.org/pdf/1506.05869v1.pdf), which described the construction of a neural network based on sequence-to-sequence models. This type of model maps an input sequence, such as "ABC", to an output sequence, such as "XYZ". These inputs and outputs can be translations from one language to another, for example. However, in the case of their work here, the training data was not language translation, but rather tech support transcripts and movie dialog. While the results from both models are interesting, it was the interactions based on the movie model that stole the headlines.
The following are sample interactions taken from the paper: None of this was explicitly encoded by humans or present in the training set as asked, and yet, looking at this, it is frighteningly like speaking with a human. However, let's see more… Note that the model responds with what appears to be knowledge of gender (he, she), of place (England), and career (player). Even questions of meaning, ethics, and morality are fair game: The conversation continues: If this transcript doesn't give you a slight chill of fear for the future, there's a chance you may already be some sort of AI. I wholeheartedly recommend reading the entire paper. It isn't overly technical, and it will definitely give you a glimpse of where this technology is headed. We talked a lot about the history, types, and design of chatbots, but let's now move on to building our own!

Building a chatbot

Now, having seen what is possible in terms of chatbots, you most likely want to build the best, most state-of-the-art, Google-level bot out there, right? Well, just put that out of your mind right now, because we will do just the opposite! We will build the best, most awful bot ever! Let me tell you why. Building a chatbot comparable to what Google built takes some serious hardware and time. You aren't going to whip up a model on your MacBook Pro that takes anything less than a month or two to run with any type of real training set. This means that you will have to rent some time on an AWS box, and not just any box. This box will need to have some heavy-duty specs and preferably be GPU-enabled. You are more than welcome to attempt such a thing. However, if your goal is just to build something very cool and engaging, I have you covered here. I should also warn you in advance that, although Cleverbot is no Tay, the conversations can get a bit salty. If you are easily offended, you may want to find a different training set.

Ok, let's get started! First, as always, we need training data. Again, as always, this is the most challenging step in the process. Fortunately, I have come across an amazing repository of conversational data. The notsocleverbot.com site has people submit the most absurd conversations they have with Cleverbot. How can you ask for a better training set? Let's take a look at a sample conversation between Cleverbot and a user from the site: So, this is where we'll begin. We'll need to download the transcripts from the site to get started. You'll just need to paste the link into the form on the page. The format will be like the following: http://www.notsocleverbot.com/index.php?page=1. Once this is submitted, the site will process the request and return a page that looks like the following: From here, if everything looks right, click on the pink Done button near the top right. The site will process the page and then bring you to the following page: Next, click on the Show URL Generator button in the middle. Then you can set the range of page numbers that you'd like to download, for example, 1-20, by 1 step. Obviously, the more pages you capture, the better this model will be. However, remember that you are taxing the server, so please be considerate. Once this is done, click on Add to list, hit Return in the text box, and you should be able to click on Save. It will begin running, and when it is complete, you will be able to download the data as a CSV file. Next, we'll use our Jupyter notebook to examine and process the data. We'll first import pandas and the Python regular expressions library, re.
We will also set an option in pandas to widen our column width so that we can see the data better:

import pandas as pd
import re
pd.set_option('display.max_colwidth', 200)

Now, we'll load in our data:

df = pd.read_csv('/Users/alexcombs/Downloads/nscb.csv')
df

The preceding code will result in the following output: As we're only interested in the first column, the conversation data, we'll parse this out:

convo = df.iloc[:,0]
convo

The preceding code will result in the following output: You should be able to make out that we have interactions between User and Cleverbot, and that either can initiate the conversation. To get the data in the format that we need, we'll have to parse it into question and response pairs. We aren't necessarily concerned with who says what, but we are concerned with matching up each response to each question. You'll see why in a bit. Let's now perform a bit of regular expression magic on the text:

clist = []
def qa_pairs(x):
    cpairs = re.findall(": (.*?)(?:$|\n)", x)
    clist.extend(list(zip(cpairs, cpairs[1:])))

convo.map(qa_pairs);
convo_frame = pd.Series(dict(clist)).to_frame().reset_index()
convo_frame.columns = ['q', 'a']

The preceding code results in the following output: Okay, there's a lot of code there. What just happened? We first created a list to hold our question and response tuples. We then passed our conversations through a function to split them into these pairs using regular expressions. Finally, we set it all into a pandas DataFrame with columns labelled q and a. We will now apply a bit of algorithm magic to match up the closest question to the one a user inputs:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

vectorizer = TfidfVectorizer(ngram_range=(1,3))
vec = vectorizer.fit_transform(convo_frame['q'])

What we did in the preceding code was to import the TfidfVectorizer and cosine similarity libraries. We then used our training data to create a tf-idf matrix. We can now use this to transform new questions and measure their similarity to existing questions in our training set. We covered cosine similarity and tf-idf algorithms in detail earlier, so flip back there if you want to understand how these work under the hood. Let's now get our similarity scores:

my_q = vectorizer.transform(['Hi. My name is Alex.'])
cs = cosine_similarity(my_q, vec)
rs = pd.Series(cs[0]).sort_values(ascending=0)
top5 = rs.iloc[0:5]
top5

The preceding code results in the following output: What are we looking at here? This is the cosine similarity between the question I asked and the top five closest questions. To the left is the index and on the right is the cosine similarity. Let's take a look at these:

convo_frame.iloc[top5.index]['q']

This results in the following output: As you can see, nothing is exactly the same, but there are definitely some similarities. Let's now take a look at the response:

rsi = rs.index[0]
rsi
convo_frame.iloc[rsi]['a']

The preceding code results in the following output: Okay, so our bot seems to have an attitude already. Let's push further.
We'll create a handy function so that we can test a number of statements easily:

def get_response(q):
    my_q = vectorizer.transform([q])
    cs = cosine_similarity(my_q, vec)
    rs = pd.Series(cs[0]).sort_values(ascending=0)
    rsi = rs.index[0]
    return convo_frame.iloc[rsi]['a']

get_response('Yes, I am clearly more clever than you will ever be!')

This results in the following output: We have clearly created a monster, so we'll continue:

get_response('You are a stupid machine. Why must I prove anything to you?')

This results in the following output: I'm enjoying this. Let's keep rolling with it:

get_response('My spirit animal is a menacing cat. What is yours?')

To which I responded:

get_response('I mean I didn\'t actually name it.')

This results in the following output: Continuing:

get_response('Do you have a name suggestion?')

This results in the following output: To which I respond:

get_response('I think it might be a bit aggressive for a kitten')

This results in the following output: I attempt to calm the situation:

get_response('No need to involve the police.')

This results in the following output: And finally,

get_response('And I you, Cleverbot')

This results in the following output: Remarkably, this may be one of the best conversations I've had in a while: bot or no bot.

Now that we have created this cake-based intelligence, let's set it up so that we can actually chat with it via text message. We'll need a few things to make this work. The first is a Twilio account. They will give you a free account that lets you send and receive text messages. Go to http://www.twilio.com and click to sign up for a free developer API key. You'll set up some login credentials, and they will text your phone to confirm your number. Once this is set up, you'll be able to find the details in their Quickstart documentation. Make sure that you select Python from the drop-down menu in the upper left-hand corner. Sending messages from Python code is a breeze, but you will need to request a Twilio number. This is the number that you will use to send and receive messages in your code. The receiving bit is a little more complicated because it requires that you have a web server running. The documentation is succinct, so you shouldn't have that hard a time getting it set up. You will need to paste a public-facing Flask server's URL in under the area where you manage your Twilio numbers. Just click on the number and it will bring you to the spot to paste in your URL: Once this is all set up, you will just need to make sure that you have your Flask web server up and running.
I have condensed all the code here for you to use in your Flask app:

from flask import Flask, request, redirect
import twilio.twiml
import pandas as pd
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

app = Flask(__name__)

PATH_TO_CSV = 'your/path/here.csv'
df = pd.read_csv(PATH_TO_CSV)
convo = df.iloc[:,0]
clist = []

def qa_pairs(x):
    cpairs = re.findall(": (.*?)(?:$|\n)", x)
    clist.extend(list(zip(cpairs, cpairs[1:])))

convo.map(qa_pairs);
convo_frame = pd.Series(dict(clist)).to_frame().reset_index()
convo_frame.columns = ['q', 'a']

vectorizer = TfidfVectorizer(ngram_range=(1,3))
vec = vectorizer.fit_transform(convo_frame['q'])

@app.route("/", methods=['GET', 'POST'])
def get_response():
    input_str = request.values.get('Body')

    def get_response(q):
        my_q = vectorizer.transform([input_str])
        cs = cosine_similarity(my_q, vec)
        rs = pd.Series(cs[0]).sort_values(ascending=0)
        rsi = rs.index[0]
        return convo_frame.iloc[rsi]['a']

    resp = twilio.twiml.Response()
    if input_str:
        resp.message(get_response(input_str))
        return str(resp)
    else:
        resp.message('Something bad happened here.')
        return str(resp)

It looks like there is a lot going on, but essentially we use the same code that we used before, only now we grab the POST data that Twilio sends—the text body specifically—rather than the data we hand-entered before into our get_response function. If all goes as planned, you should have your very own weirdo bestie that you can text anytime, and what could be better than that!

Summary

In this article, we had a full tour of the chatbot landscape. It is clear that we are just on the cusp of an explosion of these sorts of applications. The Conversational UI revolution is just about to begin. Hopefully, this article has inspired you to create your own bot, but if not, at least perhaps you have a much richer understanding of how these applications work and how they will shape our future. I'll let the app say the final words:

get_response("Say goodbye, Clevercake")

Resources for Article:
Further resources on this subject:
Supervised Machine Learning [article]
Unsupervised Learning [article]
Specialized Machine Learning Topics [article]

Define the Necessary Connections

Packt
02 Dec 2016
5 min read
In this article by Robert van Mölken and Phil Wilkins, the authors of the book Implementing Oracle Integration Cloud Service, we will see how to create connections, which are one of the core components of an integration. We can easily navigate to the Designer Portal and start creating connections.

(For more resources related to this topic, see here.)

On the home page, click the Create link of the Connection tile, as shown in the following screenshot: Because we click this link, the Connections page is loaded, which lists all created connections, and a modal dialog automatically opens on top of the list. This pop-up shows all the adapter types we can create. For our first integration, we define two technology adapter connections: an inbound SOAP connection and an outbound REST connection.

Inbound SOAP connection

In the pop-up, we can scroll down the list and find the SOAP adapter, but the modal dialog also includes a search field. Just search on SOAP and the list will show the adapters matching the search criteria. Find your adapter by searching on the name, or change the appearance from card to list view to show more adapters at once. Click Select to open the New Connection page. Before we can set up any adapter-specific configuration, every creation starts with choosing a name and an optional description. Create the connection with the following details:

Connection Name: FlightAirlinesSOAP_Ch2
Identifier: This will be proposed based on the connection name, and there is no need to change it unless you'd like an alternate name. It is usually the name in all CAPITALS, without spaces, and has a maximum length of 32 characters.
Connection Role: Trigger. The role chosen restricts the connection to be used only in the selected role(s).
Description: This receives Airline objects as a SOAP service.

Click the Create button to accept the details. This will bring us to the specific adapter configuration page, where we can add and modify the necessary properties. The one thing all the adapters have in common is the optional Email Address under Connection Administration. This email address is used to send notifications to when problems or changes occur in the connection. A SOAP connection consists of three sections: Connection Properties, Security, and an optional Agent Group. On the right side of each section we can find a button to configure its properties. Let's configure each section using the following steps:

Click the Configure Connectivity button.
Instead of entering a URL, we are uploading the WSDL file. Check the box in the Upload File column.
Click the newly shown Upload button.
Upload the file ICSBook-Ch2-FlightAirlines-Source WSDL.
Click OK to save the properties.
Click the Configure Credentials button. In the pop-up that is shown, we can configure the security credentials. We have the choice of Basic authentication, Username Password Token, or No Security Policy. Because we use this for our inbound connection, we don't have to configure it.
Select No Security Policy from the dropdown list. This removes the username and password fields.
Click OK to save the properties.
We leave the Agent Group section untouched. We can attach an Agent Group if we want to use it as an outbound connection to an on-premises web service.
Click Test to check if the connection is working (otherwise it can't be used). For SOAP and REST it simply pings the given domain to check connectivity, but others, for example the Oracle SaaS adapters, also authenticate and collect metadata.
Click the Save button at the top of the page to persist our changes.
Click Exit Connection to return to the list from where we started.

Outbound REST connection

Now that the inbound connection is created, we can create our REST adapter. Click the Create New Connection button to show the Create Connection pop-up again and select the REST adapter. Create the connection with the following details:

Connection Name: FlightAirlinesREST_Ch2
Identifier: This will be proposed based on the connection name
Connection Role: Invoke
Description: This returns the Airline objects as a REST/JSON service
Email Address: Your email address, to which notifications will be sent

Let's configure the connection properties using the following steps:

Click the Configure Connectivity button.
Select REST API Base URL for the Connection Type.
Enter the URL where your Apiary mock is running: http://private-xxxx-yourapidomain.apiary-mock.com.
Click OK to save the values.

Next, configure the security credentials using the following steps:

Click the Configure Credentials button.
Select No Security Policy for the Security Policy. This removes the username and password fields.
Click the OK button to save our choice.
Click Test at the top to check if the connection is working.
Click the Save button at the top of the page to persist our changes.
Click Exit Connection to return to the list from where we started.

Troubleshooting

If the test fails for one of these connections, check that the correct WSDL is used and that the connection URL for the REST adapter exists and is reachable.

Summary

In this article, we looked at the process of creating and testing the necessary connections and the creation of the integration itself. We have seen an inbound SOAP connection and an outbound REST connection. In demonstrating the integration, we have also seen how to use Apiary to document and mock our backend REST service.

Resources for Article:
Further resources on this subject:
Getting Started with a Cloud-Only Scenario [article]
Extending Oracle VM Management [article]
Docker Hosts [article]

Modelling a RPG in D

Ryan Roden-Corrent
02 Dec 2016
7 min read
In this post, I'll show off some of the cool features of a language called D in the context of creating a game, specifically an RPG.

Character Stats

For our RPG, let's say there are three categories of stats on every character:

Attributes: An int value for each of the classic six (Strength, Dexterity, and so on).
Skills: An int value for each of several skills (diplomacy, stealth, and so on).
Resistances: An int value for each 'type' (physical, fire, and so on) of damage.

In D, we can represent such a character like so:

struct Character {
  // attributes
  int strength;
  int dexterity;
  int constitution;
  int intellect;
  int wisdom;
  int charisma;

  // skills
  int stealth;
  int perception;
  int diplomacy;

  // resistances
  int resistPhysical;
  int resistFire;
  int resistWater;
  int resistAir;
  int resistEarth;
}

However, it would be nicer if we could have each category (attributes, skills, and resistances) represented as a single group of values. First, let's define some enums:

enum Attribute { strength, dexterity, constitution, intellect, wisdom, charisma }
enum Skill { stealth, perception, diplomacy }
enum Element { physical, fire, water, air, earth }

Now we want to map each of these enum members to a value for that particular attribute, skill, or resistance. One option is an associative array, which would look like this:

struct Character {
  int[Attribute] attributes;
  int[Skill] skills;
  int[Element] resistances;
}

int[Attribute] attributes declares that Character.attributes returns an int when indexed by an Attribute, like so:

if (hero.attributes[Attribute.dexterity] < 4) hero.trip();

However, associative arrays are heap allocated and don't have a default value for each key. It seems like overkill for storing a small bundle of values. Another option is a static array. Static arrays are stack-allocated value types and will contain exactly the number of values that we need.

struct Character {
  int[6] attributes;
  int[3] skills;
  int[5] resistances;
}

Our enum values are backed by ints, so we can use them directly as indexes just as we did with the associative array:

if (hero.attributes[Attribute.intellect] > 12) hero.pontificate();

This is more efficient for our needs, but nothing enforces using enums as keys. If we accidentally gave an out-of-bounds index, the compiler wouldn't catch it and we'd get a runtime error. Ideally, we want the efficiency of the static array with the syntax of the associative array. Even better, it would be nice if we could say something like attributes.charisma instead of attributes[Attribute.charisma], like you would with a table in Lua. Fortunately, you can achieve this with only a few lines of D code.

The Enumap

import std.traits;

/// Map each member of the enum `K` to a value of type `V`
struct Enumap(K, V) {
  private enum N = EnumMembers!K.length;
  private V[N] _store;

  auto opIndex(K key) { return _store[key]; }
  auto opIndexAssign(V value, K key) { return _store[key] = value; }
}

Here's a line-by-line breakdown:

import std.traits;

We need access to std.traits.EnumMembers, a standard-library function that returns (at compile time!) the members of an enum.

struct Enumap(K, V)

Here, we declare a templated struct. In many other languages, this would look like Enumap<K, V>. K will be our key type (the enum) and V will be the value. K and V are known as 'compile-time parameters'. In this case, they are simply used to create a generic type, but in D, such parameters can be used for much more than just generic types, as we will see later.
private enum N = EnumMembers!K.length;
private V[N] _store;

Here we leverage EnumMembers to determine how many entries are in the provided enum. We use this to declare a static array capable of holding exactly N Vs.

auto opIndex(K key) { return _store[key]; }
auto opIndexAssign(V value, K key) { return _store[key] = value; }

opIndex is a special method that allows us to provide a custom implementation of the indexing ([]) operator. The call skills[Skill.stealth] is translated to skills.opIndex(Skill.stealth), while the assignment skills[Skill.stealth] = 5 is translated to skills.opIndexAssign(5, Skill.stealth). Let's use that in our Character struct:

struct Character {
  Enumap!(Attribute, int) attributes;
  Enumap!(Skill, int) skills;
  Enumap!(Element, int) resistances;
}

if (hero.attributes[Attribute.wisdom] < 2) hero.drink(unidentifiedPotion);

There! Now the length of each underlying array is figured out for us, and the values can only be accessed using the enum members as keys. The underlying array _store is statically sized, so it requires no managed-memory allocation. Here's the really clever bit:

import std.conv;
//...
struct Enumap(K, V) {
  //...
  auto opDispatch(string s)() { return this[s.to!K]; }
  auto opDispatch(string s)(V val) { return this[s.to!K] = val; }
}

if (hero.attributes.charisma < 5) hero.makeAwkwardJoke();

opDispatch essentially overloads the . operator to provide some nice syntactic sugar. Here's a quick rundown of what happens for hero.attributes.charisma = 5:

The compiler sees attributes.charisma.
It looks for the charisma symbol in the type Enumap!(Attribute, int).
Failing to find this, it tries attributes.opDispatch!"charisma".
That call resolves to attributes["charisma".to!Attribute].
And further resolves to attributes[Attribute.charisma].

Remember I mentioned that compile-time arguments can be much more than types? Here is a compile-time string argument; in this case, its value is whatever symbol follows the '.'. Note that the above happens at compile time and is equivalent to using the indexing operator. So, we get the "charisma" string, but what we actually want is the enum member Attribute.charisma. std.conv.to makes quick work of this; it can, among other things, translate between strings and enum names.

A Step Further – Enumap Arithmetic

Let's suppose we add items to the game, and each item can provide some stat bonuses:

struct Item {
  Enumap!(Attribute, int) bonuses;
}

It would be really nice if we could just add these bonuses to our character's base stats, like so:

auto totalStats = character.attributes + item.bonuses;

Yet again, D lets us implement this quite concisely, this time by leveraging opBinary.

struct Enumap(K, V) {
  //...
  auto opBinary(string op)(typeof(this) other) {
    V[N] result = mixin("_store[] " ~ op ~ " other._store[]");
    return typeof(this)(result);
  }
}

Breakdown time again!

auto opBinary(string op)(typeof(this) other)

An expression like enumap1 + enumap2 will get translated (at compile time!) to enumap1.opBinary!"+"(enumap2). The operator (in this case, +) is passed as a compile-time string argument. If passing the operator as a string sounds weird, read on…

V[N] result = mixin("_store[] " ~ op ~ " other._store[]");

mixin is a D keyword that translates a compile-time string into code. Continuing with our + example, we end up with V[N] result = mixin("_store[] " ~ "+" ~ " other._store[]"), which simplifies to V[N] result = _store[] + other._store[];.
The _store[] + other._store[] expression is called an "array-wise operation". It's a concise way of performing an operation between the corresponding elements of two arrays, in this case adding each pair of integers into a resulting array.

return typeof(this)(result);

Here we wrap the resulting array in an Enumap before returning it. typeof(this) resolves to the enclosing type. It is equivalent, but preferable, to Enumap!(K, V), because if we change the name of the struct, we won't have to refactor this line. In many languages, we'd have to separately define opAdd, opSub, opMult, and more, most of which would likely contain similar code. However, thanks to the way opBinary allows us to work with a string representation of the operator at compile time, our single opBinary implementation supports operators like - and * as well.

Summary

I hope you enjoyed learning a little about D! There is a full implementation of Enumap available here: https://github.com/rcorre/enumap.

About the Author

Ryan Roden-Corrent is a software developer by trade and hobby. He is an active contributor to the free/open source software community and has a passion for simple but effective tools. He started gaming at a young age and dabbles in all aspects of game development, from coding to art and music. He's also an aspiring musician and yoga teacher. You can find his open source work on GitHub and his Creative Commons art online.
Administering a Swarm Cluster

Packt
02 Dec 2016
12 min read
In this article by Fabrizio Soppelsa and Chanwit Kaewkasi, the authors of Native Docker Clustering with Swarm, we're now going to see how to administer a running Swarm cluster. The topics include scaling the cluster size (adding and removing nodes), updating cluster and node information, handling the node status (promotion and demotion), troubleshooting, and graphical interfaces (UI). (For more resources related to this topic, see here.)

Docker Swarm standalone

In standalone mode, cluster operations must be done directly inside the 'swarm' container. We're not going to cover every option in detail. Swarm standalone is not deprecated yet and is still in use, which is why we discuss it here, but it will probably be declared deprecated soon, as it has been superseded by Swarm mode. The commands to administer a Docker Swarm standalone cluster are:

Create (c): Typically, in production people use Consul or Etcd, so this command has no relevance for production.
List (l): This shows the list of cluster nodes, based on an iteration through Consul or Etcd; that is, the Consul or Etcd endpoint must be passed as an argument.
Join (j): This joins a node on which the swarm container is running to the cluster. Here, again, a discovery mechanism must be passed at the command line.
Manage (m): This is the core of the standalone mode. Managing a cluster here means changing some cluster properties, such as filters, schedulers, external CA URLs, and timeouts.

Docker Swarm mode: Scale a cluster size

Manually adding nodes

You can choose to create Docker hosts whichever way you prefer. If you plan to use Docker Machine, you're probably going to hit Machine's limits very soon, and you will need to be very patient while even listing machines, having to wait several seconds for Machine to gather and print all the information. My favorite method is to use Machine with the generic driver, delegating the host provisioning (operating system installation, network and security group configuration, and so on) to something else (for example, Ansible), and later exploiting Machine to install Docker the proper way:

Manually configure the cloud environment (security groups, networks, and so on).
Provision Ubuntu hosts with a third-party tool.
Run Machine with the generic driver on these hosts with the sole goal of properly installing Docker.
Then handle the hosts with the tool from step 2, or even others.

If you use Machine's generic driver, it will select the latest stable Docker binaries. While we were writing this article, in order to use Docker 1.12, we had to overcome this by passing Machine a special option to get the latest, unstable, version of Docker:

docker-machine create -d DRIVER --engine-install-url https://test.docker.com mymachine

For a production Swarm (mode), at the time you'll be reading this article, 1.12 will already be stable, so this trick will not be necessary anymore, unless you need to use some of the very latest Docker features.

Managers

The theory of HA suggests that the number of managers must be odd and greater than or equal to 3. This is to grant a quorum in high availability; that is, the majority of nodes must agree on which nodes are leading the operations. If there were two managers, and one goes down and comes back, it's possible that both will think they are the leader. That causes a logical breakdown in the cluster organization called split brain. The more managers you have, the more failures the cluster can resist.
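As a rule of thumb (this formula is not spelled out in the article, but it is the standard majority arithmetic behind the table that follows), with N managers:

quorum(N) = floor(N / 2) + 1
maximum tolerated manager failures(N) = N - quorum(N) = floor((N - 1) / 2)

For example, with N = 5 the quorum is 3 and the cluster tolerates 2 failed managers, which matches the table below.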
Refer to the following table:

Number of managers   Quorum (majority)   Maximum possible failures
3                    2                   1
5                    3                   2
7                    4                   3
9                    5                   4

Also, in Swarm mode, an overlay network is created automatically and associated with the nodes as the ingress network. Its purpose is to be used with containers: you will want your containers to be attached to an internal overlay (VxLAN mesh) network to communicate with each other, rather than using public or other networks. Thus, Swarm creates this network for you, ready to use.

We further recommend geographically distributing the managers. If an earthquake hits the datacenter where all the managers are running, the cluster would go down, wouldn't it? So, consider placing each manager, or group of managers, in a different physical location. With the advent of cloud computing, that's really easy: you can spin up each manager in a different AWS region or, even better, run each manager on a different provider and region, that is, on AWS, on DigitalOcean, on Azure, and also on a private cloud such as OpenStack.

Workers

You can add an arbitrary number of workers. This is the elastic part of the Swarm. It's totally fine to have 5, 15, 200, or 2,300 running workers. This is the easiest part to handle: you can add and remove workers with no burden, at any time, at any size.

Scripted node addition

The very easiest way to add nodes, if you plan not to go over 100 nodes in total, is to use basic scripting. At the time of docker swarm init, just copy and paste the lines printed in the output. Then, create a bunch of workers:

#!/bin/bash
for i in `seq 0 9`;
do
  docker-machine create -d amazonec2 --engine-install-url https://test.docker.com --amazonec2-instance-type "t2.large" swarm-worker-$i
done

After that, it will only be necessary to go through the list of machines, SSH into them, and join the nodes:

#!/bin/bash
SWARMWORKER="swarm-worker-"
for machine in `docker-machine ls --format {{.Name}} | grep $SWARMWORKER`;
do
  docker-machine ssh $machine sudo docker swarm join --token SWMTKN-1-5c3mlb7rqytm0nk795th0z0eocmcmt7i743ybsffad5e04yvxt-9m54q8xx8m1wa1g68im8srcme 172.31.10.250:2377
done

This script runs through the machines, and for each one with a name starting with swarm-worker-, it SSHes into the machine and joins the node to the existing Swarm through the leader manager, here 172.31.10.250. Refer to https://github.com/swarm2k/swarm2k/tree/master/amazonec2 for some further details or to download these one-liners.

Belt

Belt is another tool for massively provisioning Docker Engines. It is basically an SSH wrapper on steroids, and it requires you to prepare provider-specific images as well as provisioning templates before provisioning at scale. In this section, we'll learn how to do so. You can compile Belt yourself by getting its source from GitHub:

# Set $GOPATH here
go get github.com/chanwit/belt

Currently, Belt supports the DigitalOcean driver. We can prepare our template for provisioning inside config.yml, such as the following:

digitalocean:
  image: "docker-1.12-rc4"
  region: nyc3
  ssh_key_fingerprint: "your SSH ID"
  ssh_user: root

Then we can create a hundred nodes with basically a couple of commands. First, we create three 16 GB boxes, namely mg0, mg1, and mg2:
$ belt create 16gb mg[0:2]
NAME   IPv4              MEMORY   REGION   IMAGE                     STATUS
mg2    104.236.231.136   16384    nyc3     Ubuntu docker-1.12-rc4    active
mg1    45.55.136.207     16384    nyc3     Ubuntu docker-1.12-rc4    active
mg0    45.55.145.205     16384    nyc3     Ubuntu docker-1.12-rc4    active

Then we can use the status command to wait for all nodes to become active:

$ belt status --wait active=3
STATUS   #NODES   NAMES
active   3        mg2, mg1, mg0

We'll do this again for 10 worker nodes:

$ belt create 512mb node[1:10]
$ belt status --wait active=13
STATUS   #NODES   NAMES
active   3        node10, node9, node8, node7

Use Ansible

You can alternatively use Ansible (if you like it; it's becoming very popular) to make things more repeatable. I (Fabrizio) created some Ansible modules to work directly with Machine and Swarm (mode), compatible with Docker 1.12 (https://github.com/fsoppelsa/ansible-swarm). They require Ansible 2.2+, the very first version of Ansible compatible with binary modules. You will need to compile the modules (written in Go) and then pass them to ansible-playbook with the -M parameter:

git clone https://github.com/fsoppelsa/ansible-swarm
cd ansible-swarm/library
go build docker_machine.go
go build docker_swarm.go
cd ..

There are some example plays in playbooks/. Ansible's play syntax is so easy to understand that it's almost superfluous to explain it in detail. I used this play to join 10 workers to the Swarm2k experiment:

---
- name: Join the Swarm2k project
  hosts: localhost
  connection: local
  gather_facts: False

  #mg0 104.236.18.183
  #mg1 104.236.78.154
  #mg2 104.236.87.10

  tasks:
    - name: Load shell variables
      shell: >
        eval $(docker-machine env "{{ machine_name }}")
        echo $DOCKER_TLS_VERIFY &&
        echo $DOCKER_HOST &&
        echo $DOCKER_CERT_PATH &&
        echo $DOCKER_MACHINE_NAME
      register: worker

    - name: Set facts
      set_fact:
        whost: "{{ worker.stdout_lines[0] }}"
        wcert: "{{ worker.stdout_lines[1] }}"

    - name: Join a worker to Swarm2k
      docker_swarm:
        role: "worker"
        operation: "join"
        join_url: ["tcp://104.236.78.154:2377"]
        secret: "d0cker_swarm_2k"
        docker_url: "{{ whost }}"
        tls_path: "{{ wcert }}"
      register: swarm_result

    - name: Print final msg
      debug: msg="{{ swarm_result.msg }}"

Basically, after loading some host facts from Machine, it invokes the docker_swarm module:

The operation is join.
The role of the new node is worker.
The new node joins tcp://104.236.78.154:2377, which was the leader manager at the time of joining. This argument takes an array of managers, such as ["tcp://104.236.78.154:2377", "104.236.18.183:2377", "tcp://104.236.87.10:2377"].
It passes the password (secret).
It specifies some basic Engine connection facts. The module will connect to the docker_url using the certificates at tls_path.

After having docker_swarm.go compiled in library/, adding workers to the Swarm is as easy as:

#!/bin/bash
SWARMWORKER="swarm-worker-"
for machine in `docker-machine ls --format {{.Name}} | grep $SWARMWORKER`;
do
  ansible-playbook -M library --extra-vars "{machine_name: $machine}" playbook.yaml
done

Cluster management

Let's now operate a little with this example, made up of 3 managers and 10 workers. You can reference the nodes by calling them either by their hostname (manager1) or by their ID (ctv03nq6cjmbkc4v1tc644fsi). The other columns in the node list describe the properties of the cluster nodes.

STATUS: This is about the physical reachability of the node. If the node is up, it's Ready; otherwise, it's Down.
AVAILABILITY: This is the node availability.
A node can be either Active (that is, participating in cluster operations), Pause (in standby, suspended, not accepting tasks), or Drain (waiting to evacuate its tasks).

MANAGER STATUS: This is about the current status of a manager. If a node is not a manager, this field will be empty. If a node is a manager, this field can be either Reachable (one of the managers present to guarantee high availability) or Leader (the host leading all operations).

Node operations

The docker node command comes with some possible options.

Demotion and promotion

Promotion is possible for worker nodes (transforming them into managers), while demotion is possible for manager nodes (transforming them into workers). Always keep in mind the quorum table when managing the number of managers and workers, in order to guarantee high availability (an odd number of managers, greater than or equal to 3). Use the following syntax to promote worker0 and worker1 to managers:

docker node promote worker0
docker node promote worker1

There is no magic behind the curtain: Swarm simply attempts to change the node role with an on-the-fly instruction. Demotion works the same way (docker node demote worker1). But be careful not to demote the node you're working from; otherwise, you'll get locked out. What happens if you try to demote a Leader manager? In this case, the Raft algorithm will start an election, and a new leader will be selected from among the active managers.

Tagging nodes

You must have noticed, in the preceding screenshot, that worker9 is in Drain availability. This means that the node is in the process of evacuating its tasks (if any), which will be rescheduled somewhere else on the cluster. You can change the availability of a node by updating its status using the docker node update command. The --availability option can take either active, pause, or drain. Here we just restored worker9 to the active state:

Active: This means that the node is running and ready to accept tasks.
Pause: This means that the node is running, but not accepting tasks.
Drain: This means that the node is running and not accepting tasks; it is currently draining its tasks, which are getting rescheduled somewhere else.

Another powerful update argument concerns labels. There are --label-add and --label-rm, which allow us to add labels to and remove labels from Swarm nodes, respectively. Docker Swarm labels do not affect the Engine labels. It's possible to specify labels when starting the Docker Engine (dockerd [...] --label "staging" --label "dev" [...]), but Swarm has no power to edit or change them. The labels we see here only affect the Swarm behavior. Labels are useful to categorize nodes. When you start services, you can then filter and decide where to physically spawn containers, using labels. For instance, if you want to dedicate a bunch of nodes with SSDs to host MySQL, you can actually do this:

docker node update --label-add type=ssd --label-add type=mysql worker1
docker node update --label-add type=ssd --label-add type=mysql worker2
docker node update --label-add type=ssd --label-add type=mysql worker3

Later, when you start a service with some replica factor, say 3, you can be sure that it will start the MySQL containers exactly on worker1, worker2, and worker3 if you filter by the node label, using the node.labels.type constraint:

docker service create --replicas 3 --constraint 'node.labels.type == mysql' --name mysql-service mysql:5.5

Summary

In this article, we went through the typical Swarm administration procedures and options.
After showing how to add managers and workers to the cluster, we explained in detail how to update cluster and node properties, how to check the Swarm health, and we encountered Shipyard as a UI. After this focus on infrastructure, now it's time to use our Swarms. Resources for Article: Further resources on this subject: Hands On with Docker Swarm [article] Setting up a Project Atomic host [article] Getting Started with Flocker [article]
The Sales and Purchase Process

Packt
02 Dec 2016
21 min read
In this article by Anju Bala, the author of the book Microsoft Dynamics NAV 2016 Financial Management - Second Edition, we will see the sales and purchase process using Microsoft Dynamics NAV 2016 in detail. Sales and purchases are two essential business areas in all companies. In many organizations, the salesperson or the purchase department are the ones responsible for generating quotes and orders. People from the finance area are the ones in charge of finalizing the sales and purchase processes by issuing the documents that have an accountant reflection: invoices and credit memos. In the past, most systems required someone to translate all the transactions to accountancy language, so they needed a financer to do the job. In Dynamics NAV, anyone can issue an invoice, with zero accountant knowledge needed. But a lot of companies keep their old division of labor between departments. This is why we have decided to explain the sales and purchase processes in this book. This article explains how their workflows are managed in Dynamics NAV. In this article you will learn: What is Dynamics NAV and what it can offer to your company To define the master data needed to sell and purchase How to set up your pricing policies (For more resources related to this topic, see here.) Introducing Microsoft Dynamics NAV Dynamics NAV is an Enterprise Resource Planning (ERP) system targeted at small and medium-sized companies. An ERP is a system, a software, which integrates the internal and external management information across an entire organization. The purpose of an ERP is to facilitate the flow of information between all business functions inside the boundaries of the organizations. An ERP system is meant to handle all the organization areas on a single software system. This way the output of an area can be used as an input of another area. Dynamics NAV 2016 covers the following functional areas: Financial Management: It includes accounting, G/L budgets, account schedules, financial reporting, cash management, receivables and payables, fixed assets, VAT reporting, intercompany transactions, cost accounting, consolidation, multicurrency, and Intrastat. Sales and Marketing: This area covers customers, order processing, pricing, contacts, marketing campaigns, and so on. Purchase: The purchase area includes vendors, order processing, approvals, planning, costing, and other such areas. Warehouse: Under the warehouse area you will find inventory, shipping and receiving, locations, picking, assembly, and so on. Manufacturing: This area includes product design, capacities, planning, execution, costing, subcontracting, and so on. Job: Within the job area you can create projects, phases and tasks, planning, time sheets, work in process, and other such areas. Resource Planning: Manage resources, capacity, and so on. Service: Within this area you can manage service items, contracts, order processing, planning and dispatching, service tasks, and so on. Human Resources: Manage employees, absences, and so on. Some of these areas will be covered in detail in this book. Dynamics NAV offers much more than robust financial and business management functionalities. It is also a perfect platform to customize the solution to truly fit your company needs. If you have studied different ERP solutions, you know by now customizations to fit your specific needs will always be necessary. Dynamics NAV has a reputation as being easy to customize, which is a distinct advantage. 
Since you will probably have customizations in your system, you might find some differences with what is explained in this book. Your customizations could imply that: You have more functionality in your implementation Some steps are automated, so some manual work can be avoided Some features behave different than explained here There are new functional areas in your Dynamics NAV In addition Dynamics NAV has around forty different country localizations that are meant to cover country-specific legal requirements or common practices. Many people and companies have already developed solutions on top of Dynamics NAV to cover horizontal or industry-specific needs, and they have registered their solution as an add-on, such as: Solutions for the retail industry or the food and beverages industry Electronic Data Interchange (EDI) Quality or Maintenance management Integration with third-party applications such as electronic shops, data warehouse solutions, or CRM systems Those are just a few examples. You can find almost 2,000 registered third-party solutions that cover all kinds of functional areas. If you feel that Dynamics NAV does not cover your needs and you will need too much customization, the best solution will probably be to look for an existing add-on and implement it along with your Dynamics NAV. Anyway, with or without an add-on, we said that you will probably need customizations. How many customizations can you expect? This is hard to tell as each case is particular, but we'll try to give you some highlights. If your ERP system covers a 100 percent of your needs without any customization, you should worry. This means that your procedures are so standard that there is no difference between you and your competence. You are not offering any special service to your customer, so they are only going to measure you by the price they are getting. On the other hand, if your Dynamics NAV only covers a low percentage of your needs it could just mean two things: this is not the product you need; or your organization is too chaotic and you should re-think your processes to standardize them a bit. Some people agree that the ideal scenario would be to get about 70-80 percent of your needs covered out of the box, and about 20-30 percent customizations to cover those needs that make you different from your competitors. Importance of Financial Management In order to use Dynamics NAV, all organizations have to use the Financial Management area. It is the epicenter of the whole application. Any other area is optional and their usage depends on the organization's needs. The sales and the purchase areas are also used in almost any Dynamics NAV implementation. Actually, accountancy is the epicenter, and the general ledger is included inside the Financial Management area. In Dynamics NAV everything leads to accounting. It makes sense as accountancy is the act of recording, classifying, and summarizing, in terms of money, the transactions and events that take place in the company. Every time the warehouse guy ships an item, or the payment department orders a transfer, these actions can be written in terms of money using accounts, credit, and debit amounts. An accountant could collect all the company transactions and translate them one by one to accountancy language. But this means manual duplicate work, a lot of chances of getting errors and inconsistencies, and no real-time data. On the other hand, Dynamics NAV is capable to interpret such transactions and translate them to accountancy on the fly. 
In Dynamics NAV everything leads to accountancy, so all the company's employees are helping the financial department with its job. The finance people can now focus on analyzing the data and making decisions; they don't have to bother with entering the data anymore.

Posted data cannot be modified (or deleted)

One of the first things you will face when working with Dynamics NAV is the inability to modify what has been posted, whether it's a sales invoice, a shipment document, a general ledger entry, or any other data. Any posted document or entry is unchangeable. This might cause frustration, especially if you are used to working with other systems that allow you to modify data. However, this feature is a great advantage since it ensures data integrity. You will never find an unbalanced transaction. If you need to correct any data, the Dynamics NAV approach is to post new entries to null the incorrect ones, and then post the good entries again. For instance, if you have posted an invoice and the prices were wrong, you will have to post a credit memo to null the original invoice and then issue a new invoice with the correct prices:

Document      No.   Amount
Invoice       01    1000
Credit Memo   01    -1000   (this nulls the original invoice)
Invoice       02    800

As you can see, this method of correcting mistakes always leaves a trace of what was wrong and how we solved it. Users may get the feeling that they have to perform too many steps to correct the data, with the addition that everyone can see that there was a mistake at some point. Our experience tells us that users tend to pay more attention before they post anything in Dynamics NAV, which leads to fewer mistakes in the first place. So another great advantage of using Dynamics NAV as your ERP system is that the whole organization tends to improve its internal procedures and make fewer mistakes.

No save button

Dynamics NAV does not have any kind of save button anywhere in the application. Data is saved to the database as it is being entered. When you enter data in one field, right after you leave the field, the data is already saved. There is no undo feature. The major advantage is that you can create any card (for instance, a Customer Card), any document (for instance, a Sales Order), or any other kind of data without knowing all the information that is needed. Imagine you need to create a new customer. You have all their fiscal data except their VAT number. You could create the card, fill in all the information except the VAT Registration No. field, and leave the card without losing the rest of the information. When you have figured out the VAT number of your customer, you can come back and fill it in. The not-losing-the-rest-of-the-information part is important. Imagine that there actually was a Save button: you spend a few minutes filling in all the information and, at the end, click on Save. At that moment, the system carries out some checks and finds out that one field is missing. It throws you a message saying that the Customer Card cannot be saved. So you basically have two options:

To lose the information introduced, find out the VAT number for the customer, and start all over again.
To cheat. Fill the field with some wrong value so that the system actually lets you save the data. Of course, you can come back to the card and change the data once you've found out the right one. But nothing will prevent any other user from posting a transaction with the customer in the meantime.
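To make the correction pattern above concrete, here is a minimal C# sketch (this is not Dynamics NAV or C/AL code; the Ledger and PostedEntry types are hypothetical and only illustrate the idea) of an append-only ledger, where posted entries are never edited and a mistake is fixed by posting an offsetting entry followed by the corrected one, exactly like the invoice / credit memo / new invoice sequence in the table above.

using System;
using System.Collections.Generic;

// A posted entry is immutable: no setters, so it cannot be changed after posting.
record PostedEntry(string Document, string No, decimal Amount);

class Ledger
{
    private readonly List<PostedEntry> _entries = new();

    // The only way to change the ledger is to append a new entry.
    public void Post(PostedEntry entry) => _entries.Add(entry);

    // Correcting a posted entry means posting its reversal plus the corrected entry.
    public void Correct(PostedEntry wrong, PostedEntry corrected)
    {
        Post(wrong with { Document = "Credit Memo", Amount = -wrong.Amount });
        Post(corrected);
    }

    public decimal Balance()
    {
        decimal total = 0;
        foreach (var entry in _entries)
        {
            total += entry.Amount;
        }
        return total;
    }
}

class Demo
{
    static void Main()
    {
        var ledger = new Ledger();
        var invoice01 = new PostedEntry("Invoice", "01", 1000m);
        ledger.Post(invoice01);

        // The prices were wrong: null the invoice with a credit memo, then post Invoice 02.
        ledger.Correct(invoice01, new PostedEntry("Invoice", "02", 800m));

        Console.WriteLine(ledger.Balance()); // 800, and the full history is preserved
    }
}

The point of the sketch is only that the history is append-only; the real product enforces this rule at the posting layer, together with the balancing checks described above.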
Understanding master data Master data is all the key information to the operation of a business. Third-party companies, such as customers and vendors, are part of the master data. The items a company manufactures or sells are also part of the master data. Many other things can be considered master data, such as the warehouses or locations, the resources, or the employees. The first thing you have to do when you start using Dynamics NAV is loading your master data into the system. Later on, you will keep growing your master data by adding new customers, for instance. To do so, you need to know which kind of information you have to provide. Customers We will open a Customer Card to see which kind of information is stored in Dynamics NAV about customers. To open a Customer Card, follow these steps: Navigate to Departments/Sales & Marketing/Sales/Customers. You will see a list of customers, find No. 10000 The Cannon Group PLC. Double-click on it to open its card, or select it and click on the View icon found on the Home tab of the ribbon. The following screenshot shows the Customer Card for The Cannon Group PLC: Customers are always referred to by their No., which is a code that identifies them. We can also provide the following information: Name, Address, and Contact: A Search Name can also be provided if you refer to your customer by its commercial name rather than by its fiscal name. Invoicing information: It includes posting groups, price and discount rates, and so on. You may still don't know what a posting group is, since it is the first time those words are mentioned on this book. At this moment, we can only tell you that posting groups are important. But it's not time to go through them yet. We will talk about posting groups in Chapter 6, Financial Management Setup. Payments information: It includes when and how will we receive payments from the customer. Shipping information: It explains how do we ship items to the customer. Besides the information you see on the card, there is much other information we can introduce about customers. Take a look at the Navigate tab found on the ribbon. Other information that can be entered is as follows: Information about bank accounts: so that we can know where can we request the payments. Multiple bank accounts can be setup for each customer. Credit card information: in case customers pay using this procedure. Prepayment information: in case you require your customers to pay in advance, either totally or partially. Additional addresses: where goods can be shipped (Ship-to Addresses). Contacts: You may relate to different departments or individuals from your customers. Relations: between our items and the customer's items (Cross References). Prices and Discounts: which will be discussed in the Pricing section. But customers, just as any other master data record, do not only have information that users inform manually. They have a bunch of other information that is filled in automatically by the system as actions are performed: History: You can see it on the right side of the card and it holds information such as how many quotes or orders are currently being processed or how many invoices and credit memos have been issued. Entries: You can access the ledger entries of a customer through the Navigate tab. They hold the details of every single monetary transaction done (invoices, credit memos, payments, and so on). 
Statistics: You can see them on the right side and they hold monetary information such as the amount in orders or what is the amount of goods or services that have been shipped but not yet invoiced. The Balance: It is a sum of all invoices issued to the customer minus all payments received from the customer. Not all the information we have seen on the Customer Card is mandatory. Actually, the only information that is required if you want to create a transaction is to give it a No. (its identification) and to fill in the posting group's fields (Gen. Bus. Posting Group and Customer Posting Group). All other information can be understood as default information and setup that will be used in transactions so that you don't have to write it down every single time. You don't want to write the customer's address in every single order or invoice, do you? Items Let's take a look now at an Item Card to see which kind of information is stored in Dynamics NAV about items. To open an Item Card, follow these steps: Navigate to Departments/Sales & Marketing/Inventory & Pricing/Items. You will see a list of items, find item 1000 Bicycle. Double-click on it to open its card. The following screenshot shows the item card for item 1000 Bicycle: As you can see in the screenshot, items first have a No., which is a code that identifies them. For an item, we can enter the following information: Description: It's the item's description. A Search Description can also be provided if you better identify an item using a different name. Base Unit of Measure: It is the unit of measure in which most quantities and other information such as Unit Price for the item will be expressed. We will see later what other units of measure can be used as well, but the Base is the most important one and should be the smallest measure in which the item can be referred. Classification: Item Category Code and Product Group Code fields offer a hierarchical classification to group items. The classification can fill in the invoicing information we will see in the next point. Invoicing information: It includes posting groups, costing method used for the item, and so on. Posting groups are explained in Chapter 6, Financial Management Setup, and costing methods are explained in Chapter 3, Accounting Processes. Pricing information: It is the item's unit price and other pricing configuration that we will cover in more detail in the Pricing section. Foreign trade information: It is needed if you have to do Instrastat reporting. Replenishment, planning, item tracking, and warehouse information: These fast-tabs are not explained in detail because they are out of the scope of this book. They are used to determine how to store the stock and how to replenish it. Besides the information you see on the Item Card, there is much other information we can introduce about items through the Navigate tab found on the ribbon: As you can see, other information that can be entered is as follows: Units of Measure: It is useful when you can sell your item either in units, boxes, or other units of measure at the same time. Variants: It is useful when you have multiple items that are actually the same one (thus, they share most of the information) but with some slight differences. You can use variants to differentiate colors, sizes, or any other small difference you can think of. Extended texts: It is useful when you need long descriptions or technical info to be shown on documents. 
Translations: It is used so that you can show item's descriptions on other languages, depending on the language used by your customers. Prices and discounts: It will be discussed in the Pricing section. As with customers, not all the information in the Item Card is mandatory. Vendors, resources, and locations We will start with third-parties; customers and vendors. They work exactly the same way. We will just look at customers, but everything we will explain about them can be applied to vendors as well. Then, we will look at items, and finally, we will take a brief look to locations and resources. The concepts learned can be used in resources and locations, and also to other master data such as G/L accounts, fixed assets, employees, service items, and so on. Pricing Pricing is the combination of prices for items and resources and the discounts that can be applied to individual document lines or to the whole document. Prices can be defined for items and resources and can be assigned to customers. Discounts can be defined for items and documents and can also be assigned to customers. Both prices and discounts can be defined at different levels and can cover multiple pricing policies. The following diagram illustrates different pricing policies that can be established in Dynamics NAV: Defining sales prices Sales prices can be defined in different levels to target different pricing policies. The easiest scenario is when we have a single price per item or resource. That is, the One single price for everyone policy. In that case, the sales price can be specified on the Item Card or on the Resource Card, in a field called Unit Price. In a more complex scenario, where prices depend on different conditions, we will have to define the possible combinations and the resulting price. We will explain how prices can be configured for items. Prices for resources can be defined in a similar way, although they offer fewer possibilities. To define sales prices for an Item, follow these steps: Navigate to Departments/Sales & Marketing/Inventory & Pricing/Items. You will see a list of items, find item 1936-S BERLIN Guest Chair, yellow. Double-click on it to open its card. On the Navigate tab, click on the Prices icon found under the Sales group. The Edit – Sales Prices page will open. As you can see in the screenshot, multiple prices have been defined for the same item. A specific price will only be used when all the conditions are met. For example, a Unit Price will be used for any customer that buys item 1936-S after the 20/01/2017 but only if they buy a minimum of 11 units. Different fields can be used to address each of the pricing policies: The combination of Sales Type and Sales Code fields enable the different prices for different customers policy Fields Unit of Measure Code and Minimum Quantity are used on the different prices per volume policy Fields Starting Date, Ending Date, and Currency Code are used on the different prices per period or currency policy They can all be used at the same time to enable mixed policies. When multiple pricing conditions are met, the price that is used is the one that is most favorable to the customer (the cheapest one). Imagine Customer 10000 belongs to the RETAIL price group. On 20/01/2017 he buys 20 units of item 1936-S. There are three different prices that could be used: the one defined for him, the one defined for its price group, and the one defined to all customers when they buy at least 11 units. 
Among the three prices, 130.20 is the cheapest one, so this is the one that will be used. Prices can be defined including or excluding VAT. Defining sales discounts Sales discounts can be defined in different levels to target different pricing policies. We can also define item discounts based on conditions. This addresses the Discounts based on items policy and also the Discounts per volume, period or currency policy, depending on which fields are used to establish the conditions. In the following screenshot, we can see some examples of item discounts based on conditions, which are called Line Discounts because they will be applied to individual document lines. In some cases, items or customers may already have a very low profit for the company and we may want to prevent the usage of line discounts, even if the conditions are met. A field called Allow Line Disc, can be found on the Customer Card and on sales prices. By unchecking it, we will prevent line discounts to be applied to a certain customer or when a specific sales price is used. Besides the line discounts, invoice discounts can be defined to use the General discounts per customer policy. Invoice discounts apply to the whole document and they depend only on the customer. Follow these steps to see and define invoice discounts for a specific customer: Open the Customer Card for customer 10000, The Cannon Group PLC. On the Navigate tab, click on Invoice Discounts. The following screenshot shows that customer 10000 has an invoice discount of 5 percent: Just as line discounts, invoice discounts can also be disabled using a field called Allow Invoice disc. that can be found on the Item Card and on sales prices. There is a third kind of discount, payment discount, which can be defined to use the Financial discounts per early payments policy. This kind of discount applies to the whole document and depends on when the payment is done. Payment discounts are bound to a Payment Term and are to be applied if the payment is received within a specific number of days. The following screenshot shows the Payment Terms that can be found by navigating to Departments/Sales & Marketing/Administration/Payment Terms: As you can see, a 2 percent payment discount has been established when the 1M(8D) Payment Term is used and the payment is received within the first eight days. Purchase pricing Purchase prices and discounts can also be defined in Dynamics NAV. The way they are defined is exactly the same as you can define sales prices and discounts. There are some slight differences: When defining single purchase pricing on the Item Card, instead of using the Unit Price field, we will use the Last Direct Cost field. This field gets automatically updated as purchase invoices are posted. Purchase prices and discounts can only be defined per single vendors and not per group of vendors as we could do in sales prices and discounts. Purchase discounts can only be defined per single items and not per group of items as we could do in sales discounts. We cannot prevent purchase discounts to be applied. Purchase prices can only be defined excluding VAT. Summary In this chapter, we have learned that Dynamics NAV as an ERP system meant to handle all the organization areas on a single software system. The sales and purchases processes can be held by anyone without the need of having accountancy knowledge, because the system is capable of translating all the transactions to accountant language on the fly. Customers, vendors, and items are the master data of these areas. 
Its information is used in documents to post transactions. There are multiple options to define your pricing policy: from one single price to everyone to different prices and discounts per groups of customers, per volume, or per period or currency. You can also define financial discounts per early payment. In the next chapter, we will learn how to manage cash by showing how to handle receivables, payables, and bank accounts. Resources for Article: Further resources on this subject: Modifying the System using Microsoft Dynamics Nav 2009: Part 3 [article] Introducing Dynamics CRM [article] Features of Dynamics GP [article]
Introduction to Functional Programming

Packt
01 Dec 2016
19 min read
In this article by Wisnu Anggoro, the author of the book Functional C#, we are going to explore functional programming by trying it out. We will use the power of C# to construct some functional code. We will also deal with the C# features that are most used in developing functional programs. By the end of this article, we will have an idea of what the functional approach in C# looks like. Here are the topics we will cover:

An introduction to the functional programming concept
A comparison between the functional and imperative approaches
The concepts of functional programming
The advantages and disadvantages of functional programming

(For more resources related to this topic, see here.)

In functional programming, we write functions without side effects, the way we write them in mathematics. A variable in the code represents the value of a function parameter, similar to a variable in a mathematical function. The idea is that a programmer defines functions, containing expressions, definitions, and parameters that can be expressed by variables, in order to solve problems. After a programmer builds a function and sends it to the computer, it's the computer's turn to do its job. In general, the role of the computer is to evaluate the expression in the function and return the result. We can imagine that the computer acts like a calculator, since it will analyze the expression from the function and yield the result to the user in a printed format. The calculator will evaluate a function that is composed of variables passed as parameters and expressions that form the body of the function. Variables are substituted by their values in the expressions. We can write simple expressions and compound expressions using algebraic operators. Since expressions without assignments never alter values, subexpressions need to be evaluated only once. Suppose we have the expression 3 + 5 inside a function. The computer will definitely return 8 as the result right after it completely evaluates it. However, this is just a simple example of how the computer acts when evaluating an expression. In fact, a programmer can increase the ability of the computer by creating complex definitions and expressions inside the function. Not only can the computer evaluate simple expressions, but it can also evaluate complex calculations and expressions.

Understanding definitions, scripts, and sessions

As we discussed earlier, the calculator will analyze the expression from the function, so let's imagine we have a calculator that has a console panel like a computer does. The difference between it and a conventional calculator is that we have to press Enter instead of = (equal to) in order to run the evaluation process of the expression. Here, we can type the expression and then press Enter. Now, imagine that we type the following expression:

3 x 9

Immediately after pressing Enter, the computer will print 27 in the console, and that's what we are expecting. The computer has done a great job of evaluating the expression we gave. Now, let's move on to analyzing the following definitions. Imagine that we type them on our functional calculator:

square a = a * a
max a b = a, if a ≥ b
        = b, if b > a

We have created two definitions, square and max. We can call this list of definitions a script. By calling the square function followed by any number representing the variable a, we will be given the square of that number.
Also, in the max definition, we supply two numbers to represent the variables a and b, and the computer will then evaluate this expression to find the larger of the two. Having written these two definitions, we can use them in what we call a session, as follows:

square (1 + 2)

The computer will definitely print 9 after evaluating the preceding function. The computer will also be able to evaluate the following function:

max 1 2

It will return 2 as the result, based on the definition we gave earlier. This is also possible if we provide the following expression:

square (max 2 5)

Then, 25 will be displayed in our calculator console panel. We can also build a definition on top of a previous definition. Suppose we want to quadruple an integer number and take advantage of the definition of the square function; here is what we can send to our calculator:

quad q = square q * square q
quad 10

The first line of the preceding expressions is the definition of the quad function. In the second line, we call that function, and we will be provided with 10000 as the result. A script can also define a variable value; for instance, take a look at the following:

radius = 20

So, we should expect the computer to be able to evaluate the following definition:

area = (22 / 7) * square (radius)

Understanding the functions for functional programming

Functional programming uses a technique of emphasizing functions and their application instead of commands and their execution. Most values in functional programming are function values. Let's take a look at the following mathematical notation:

f :: A -> B

From the preceding notation, we can say that the function f relates each element of A to an element of B. We call A the source type and B the target type. In other words, the notation A -> B states that A is the argument where we have to input a value, and B is the return value, or the output, of the function evaluation. Consider that x denotes an element of A and x + 2 denotes an element of B, so we can create the mathematical notation as follows:

f(x) = x + 2

In mathematics, we use f(x) to denote a function application. In functional programming, the function will be passed an argument and will return the result after the evaluation of the expression. We can construct many definitions for one and the same function. The following two definitions are similar and will triple the input passed as an argument:

triple y = y + y + y
triple' y = 3 * y

As we can see, triple and triple' have different expressions. However, they are the same function, so we can say that triple = triple'. Although we can have many definitions expressing one function, we will find that there is only one definition that proves to be the most efficient in the procedure of evaluation, in the sense of reducing the expression we discussed previously. Unfortunately, we cannot determine which one is the most efficient from our preceding two definitions, since that depends on the characteristics of the evaluation mechanism.

Forming the definition

Now, let's go back to our discussion on definitions at the beginning of this article. We have the following definition in order to retrieve the value from the case analysis:

max a b = a, if a ≥ b
        = b, if b > a

There are two expressions in this definition, distinguished by a Boolean-valued expression. This distinguisher is called a guard, and guards evaluate to either True or False. The first line is one of the alternative result values for this function.
It states that the return value will be a if the expression a ≥ b is True. In contrast, the function will return the value b if the expression b > a is True. Using these two cases, a ≥ b and b > a, the max value depends on the values of a and b. The order of the cases doesn't matter. We can also define the max function using the special word otherwise. This word ensures that the otherwise case will be executed if no other expression results in a True value. Here, we will refactor our max function using the word otherwise:

max a b = a, if a ≥ b
        = b, otherwise

From the preceding function definition, we can see that if the first expression is False, the function will return b immediately without evaluating any further guard. In other words, the otherwise case will always be taken if all previous guards return False. Another special word usually used in mathematical notation is where. This word is used to set a local definition for the expression of the function. Let's take a look at the following example:

f x y = (z + 2) * (z + 3)
        where z = x + y

In the preceding example, we have a function f with a variable z, whose value is determined by x and y. There, we introduce a local z definition to the function. This local definition can also be used along with the case analysis we discussed earlier. Here is an example of a local definition combined with case analysis:

f x y = x + z, if x > 100
      = x - z, otherwise
        where z = triple(y + 3)

In the preceding function, there is a local z definition, which qualifies for both the x + z and x - z expressions. As we discussed earlier, although the function has two equal to (=) signs, only one expression will return the value.

Currying

Currying is a simple technique of restructuring arguments into a sequence. It transforms an n-ary function into n unary functions, and it was created to circumvent the limitation that lambda functions are unary (they take a single argument). Let's go back to our max function again and its definition:

max a b = a, if a ≥ b
        = b, if b > a

We can see that there are no brackets around a and b in max a b, and no comma separating them. We can add brackets and a comma to the function definition, as follows:

max' (a,b) = a, if a ≥ b
           = b, if b > a

At first glance, we find the two functions to be the same since they have the same expressions. However, they are different because of their different types. The max' function has a single argument, which consists of a pair of numbers. The type of the max' function can be written as follows:

max' :: (num, num) -> num

On the other hand, the max function has two arguments. The type of this function can be written as follows:

max :: num -> (num -> num)

The max function takes a number and then returns a function from a number to a number. With the curried max function, we pass the value a to max, which returns a new function; that function is then applied to b in order to find the maximum number.

Comparison between functional and imperative programming

The main difference between functional and imperative programming is that imperative programming produces side effects while functional programming doesn't. In imperative programming, expressions are evaluated and their resulting values are assigned to variables. So, when we group a series of expressions into a function, the resulting value depends upon the state of those variables at that point in time. This is what we call side effects.
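A minimal C# sketch (this example is not part of the book's sample projects) makes the point concrete: the function below reads and updates state outside itself, so the same call produces different results, and the order and number of calls change the outcome.

using System;

class SideEffectDemo
{
    // Mutable state outside the function: the source of the side effect.
    private static int counter = 0;

    // Not pure: the result depends on, and changes, external state.
    private static int NextValue(int x)
    {
        counter += 1;          // destructive assignment
        return x + counter;
    }

    static void Main()
    {
        Console.WriteLine(NextValue(10)); // 11
        Console.WriteLine(NextValue(10)); // 12: same input, different output
        // Reordering or removing calls changes every later result,
        // which is why the order of evaluation matters here.
    }
}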
Because of the continuous change in state, the order of evaluation matters. In the functional programming world, destructive assignment is forbidden, and each time an assignment happens, a new variable is introduced.

Concepts of functional programming

We can also distinguish functional programming from imperative programming by its concepts. The core ideas of functional programming are encapsulated in constructs such as first-class functions, higher-order functions, purity, recursion over loops, and partial functions. We will discuss these concepts in this topic.

First-class and higher-order functions

In imperative programming, the data is given more importance and is passed through a series of functions (with side effects). Functions are special constructs with their own semantics. In effect, functions do not have the same standing as variables and constants. Since a function cannot be passed as a parameter or returned as a result, functions are regarded as second-class citizens of the programming world. In the functional programming world, we can pass a function as a parameter and return a function as a result. Functions obey the same semantics as variables and their values. Thus, they are first-class citizens. We can also create functions of functions, called second-order functions, through composition. There is no limit imposed on the composability of functions, and such functions are called higher-order functions. Fortunately, the C# language supports these two concepts, since it has a feature called the function object, which has types and values. To discuss the function object in more detail, let's take a look at the following code:

class Program
{
  static void Main(string[] args)
  {
    Func<int, int> f = (x) => x + 2;
    int i = f(1);
    Console.WriteLine(i);

    f = (x) => 2 * x + 1;
    i = f(1);
    Console.WriteLine(i);
  }
}

We can find the code in FuncObject.csproj, and if we run it, it will display the following output on the console screen:

3
3

Why these values? The variable f holds (x) => x + 2 when f(1) is evaluated the first time and (x) => 2 * x + 1 the second time, and both happen to return 3 for the input 1. Let's continue the discussion on function types and function values. Hit Ctrl + F5 instead of F5 in order to run the code without the debugger attached; this is useful to stop the console window from closing on exit.

Pure functions

In functional programming, most functions do not have side effects. In other words, the function doesn't change any variables outside the function itself. Also, it is consistent, which means that it always returns the same value for the same input data. The following are example actions that will generate side effects in programming:

Modifying a global variable or static variable, since it will make the function interact with the outside world.
Modifying an argument of a function. This usually happens if we pass a parameter by reference.
Raising an exception.
Taking input from or sending output to the outside world, for instance, getting a keystroke from the keyboard or writing data to the screen.

Although it does not satisfy the rules of a pure function, we will use many Console.WriteLine() calls in our programs in order to make the code samples easier to follow. The following is the sample non-pure function that we can find in NonPureFunction1.csproj:

class Program
{
  private static string strValue = "First";

  public static void AddSpace(string str)
  {
    strValue += ' ' + str;
  }

  static void Main(string[] args)
  {
    AddSpace("Second");
    AddSpace("Third");
    Console.WriteLine(strValue);
  }
}

If we run the preceding code, as expected, the following result will be displayed on the console:

First Second Third

In this code, we modify the strValue global variable inside the AddSpace function.
Since it modifies a variable outside itself, it's not considered a pure function. Let's take a look at another non-pure function example in NonPureFunction2.csproj:

class Program
{
  public static void AddSpace(StringBuilder sb, string str)
  {
    sb.Append(' ' + str);
  }

  static void Main(string[] args)
  {
    StringBuilder sb1 = new StringBuilder("First");
    AddSpace(sb1, "Second");
    AddSpace(sb1, "Third");
    Console.WriteLine(sb1);
  }
}

We see the AddSpace function again, but this time it takes an additional StringBuilder argument. In the function, we modify the sb argument by appending a space and str. Since sb refers to the same StringBuilder object as sb1, the call also modifies the sb1 variable in the Main function. Note that it will display the same output as NonPureFunction1.csproj. To convert the two preceding non-pure functions into pure function code, we can refactor the code as follows. This code can be found in PureFunction.csproj:

class Program
{
  public static string AddSpace(string strSource, string str)
  {
    return (strSource + ' ' + str);
  }

  static void Main(string[] args)
  {
    string str1 = "First";
    string str2 = AddSpace(str1, "Second");
    string str3 = AddSpace(str2, "Third");
    Console.WriteLine(str3);
  }
}

Running PureFunction.csproj, we will get the same output as the two previous non-pure examples. However, in this pure function code, we have three variables in the Main function. This is because in functional programming we cannot modify a variable we have initialized earlier. In the AddSpace function, instead of modifying the global variable or the argument, it now returns a string value to satisfy the functional rule. The following are the advantages we will have if we implement pure functions in our code:

Our code will be easier to read and maintain, because the function does not depend on external state and variables. It is also designed to perform specific tasks, which increases maintainability.
The design will be easier to change, since it is easier to refactor.
Testing and debugging will be easier, since it's quite easy to isolate a pure function.

Recursive functions

In the imperative programming world, we have destructive assignment to mutate the state of a variable. By using loops, one can change multiple variables to achieve the computational objective. In the functional programming world, since variables cannot be destructively assigned, we need recursive function calls to achieve the objective of looping. Let's create a factorial function. In mathematical terms, the factorial of a nonnegative integer N is the product of all positive integers less than or equal to N. This is usually denoted by N!. We can denote the factorial of 7 as follows:

7! = 7 x 6 x 5 x 4 x 3 x 2 x 1 = 5040

If we look deeper at the preceding formula, we will discover that its pattern is as follows:

N! = N * (N-1) * (N-2) * (N-3) * (N-4) * (N-5) ...

Now, let's take a look at the following factorial function in C#. It's an imperative approach and can be found in the RecursiveImperative.csproj file:

public partial class Program
{
  private static int GetFactorial(int intNumber)
  {
    if (intNumber == 0)
    {
      return 1;
    }
    return intNumber * GetFactorial(intNumber - 1);
  }
}

As we can see, we invoke the GetFactorial() function from the GetFactorial() function itself. This is what we call a recursive function.
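As a side note (this variant is not one of the book's sample projects; it is just a sketch of the same logic), the recursion can also be written as a single conditional expression, which mirrors the guard notation used earlier in this article, and the comment traces how the calls unwind:

public static class Factorial
{
    // Equivalent to GetFactorial above, written as one expression:
    // the n == 0 branch plays the role of the "otherwise" guard and returns 1.
    public static int Get(int n) => n == 0 ? 1 : n * Get(n - 1);

    // Trace for Get(4):
    // Get(4) = 4 * Get(3)
    //        = 4 * (3 * Get(2))
    //        = 4 * (3 * (2 * Get(1)))
    //        = 4 * (3 * (2 * (1 * Get(0))))
    //        = 4 * 3 * 2 * 1 * 1 = 24
}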
Recursive functions

In the imperative programming world, we have destructive assignment to mutate the state of a variable. By using loops, one can change multiple variables to achieve the computational objective. In the functional programming world, since a variable cannot be destructively assigned, we need recursive function calls to achieve the effect of looping. Let's create a factorial function. In mathematical terms, the factorial of a nonnegative integer N is the product of all positive integers less than or equal to N. It is usually denoted by N!. We can write the factorial of 7 as follows:

    7! = 7 x 6 x 5 x 4 x 3 x 2 x 1 = 5040

If we look more closely at the preceding formula, we discover that its pattern is as follows:

    N! = N * (N-1) * (N-2) * (N-3) * (N-4) * (N-5) ...

Now, let's take a look at the following factorial function in C#. It's the imperative-style approach and can be found in the RecursiveImperative.csproj file.

    public partial class Program
    {
        private static int GetFactorial(int intNumber)
        {
            if (intNumber == 0)
            {
                return 1;
            }
            return intNumber * GetFactorial(intNumber - 1);
        }
    }

As we can see, we invoke the GetFactorial() function from the GetFactorial() function itself. This is what we call a recursive function. We can use this function by creating a Main() method containing the following code:

    public partial class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(
                "Enter an integer number (Imperative approach)");
            int inputNumber = Convert.ToInt32(Console.ReadLine());
            int factorialNumber = GetFactorial(inputNumber);
            Console.WriteLine(
                "{0}! is {1}", inputNumber, factorialNumber);
        }
    }

We invoke the GetFactorial() method and pass our desired number as the argument. The method then multiplies our number by the result of calling GetFactorial() again with the argument decreased by 1. The recursion continues until intNumber reaches 0, at which point the function returns 1.

Now, let's compare the preceding recursive function in the imperative approach with one in the functional approach. We will use the power of the Aggregate operator from LINQ to achieve this goal. We can find the code in the RecursiveFunctional.csproj file. It looks like the following:

    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(
                "Enter an integer number (Functional approach)");
            int inputNumber = Convert.ToInt32(Console.ReadLine());
            IEnumerable<int> ints = Enumerable.Range(1, inputNumber);
            int factorialNumber = ints.Aggregate((f, s) => f * s);
            Console.WriteLine(
                "{0}! is {1}", inputNumber, factorialNumber);
        }
    }

In the preceding code, we initialize the ints variable, which contains the values from 1 up to our desired integer number, and then we reduce ints using the Aggregate operator. The output of RecursiveFunctional.csproj is exactly the same as the output of RecursiveImperative.csproj. However, the code in RecursiveFunctional.csproj uses the functional approach.

The advantages and disadvantages of functional programming

So far, we have dealt with functional programming by creating code using the functional approach. Now we can look at the advantages of the functional approach, such as the following:

The order of execution doesn't matter, since evaluation is handled by the system to compute the value we have described rather than following a sequence of steps defined by the programmer. In other words, the declarative expressions become unambiguous.
Because functional programs lean on mathematical concepts, the system can be designed with notation as close as possible to the mathematical formulation of the problem.
Variables can be replaced by their values, since the evaluation of an expression can be done at any time. The functional code is then more mathematically traceable, because the program can be manipulated or transformed by substituting equals with equals. This property is called referential transparency.
Immutability makes the functional code free of side effects. A shared variable, which is an example of a side effect, is a serious obstacle to creating parallel code and results in non-deterministic execution. By removing side effects, we get a sounder coding approach.
The power of lazy evaluation makes the program run faster because it only computes what we really require for the query result. Suppose we have a large amount of data and want to filter it by a specific condition, such as showing only the data that contains the word Name. In imperative programming, we have to evaluate every operation on all the data; the problem is that when an operation takes a long time, the program needs more time to run as well. Fortunately, functional programming that applies LINQ performs the filtering operation only when it is needed. That's why functional programming saves us a lot of time through lazy evaluation (a minimal sketch of this follows the list).
We have a solution for complex problems using composability. It is a principle that manages a problem by dividing it up and giving the pieces of the problem to several functions. The concept is similar to organizing an event and asking different people to take up particular responsibilities; by doing this, we can ensure that everything will be done properly by each person.
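As a minimal sketch of the lazy-evaluation point (the sample data and the Checking messages are our own illustration, not part of the book's sample projects), the following code defines a LINQ filter first and shows that it only runs when the results are actually enumerated:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Program
    {
        static void Main(string[] args)
        {
            List<string> data = new List<string>
            {
                "Name: Alice", "Age: 30", "Name: Bob", "City: Paris"
            };

            // Defining the query does not touch the data yet; Where is deferred.
            IEnumerable<string> namesOnly = data.Where(s =>
            {
                Console.WriteLine("Checking: " + s);
                return s.Contains("Name");
            });

            Console.WriteLine("Query defined, nothing has been checked yet.");

            // The filter runs only now, element by element, while we enumerate.
            foreach (string match in namesOnly)
            {
                Console.WriteLine("Match: " + match);
            }
        }
    }

Running it prints the "Query defined" line before any "Checking" line, which is the deferred execution that lazy evaluation relies on.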
Besides the advantages of functional programming, there are several disadvantages as well. Here are some of them:

Since there is no state and no update of variables is allowed, some performance is lost. The problem occurs when we deal with a large data structure and need to duplicate data even though only a small part of it changes.
Compared to imperative programming, much more garbage tends to be generated, due to the concept of immutability, which needs more variables to handle specific assignments. Because we cannot control garbage collection, performance can decrease as well.

Summary

We have become acquainted with the functional approach through this introduction to functional programming. We have also compared the functional approach to mathematical concepts when creating functional programs. It's now clear that the functional approach uses a mathematical approach to compose a functional program. The comparison between functional and imperative programming also led us to the important point of distinguishing the two: in functional programming, the programmer focuses on the kind of desired information and the kind of required transformation, while in the imperative approach, the programmer focuses on the way of performing the task and on tracking changes in state.

Resources for Article:

Further resources on this subject:
Introduction to C# and .NET [article]
Why we need Design Patterns? [article]
Parallel Computing [article]

article-image-storage-practices-and-migration-hyper-v-2016
Packt
01 Dec 2016
17 min read
Save for later

Storage Practices and Migration to Hyper-V 2016

In this article by Romain Serre, the author of Hyper-V 2016 Best Practices, we will learn why Hyper-V projects fail and cover an overview of the failover cluster, Storage Replica, Microsoft System Center, migrating VMware virtual machines, and upgrading single Hyper-V hosts. (For more resources related to this topic, see here.)

Why Hyper-V projects fail

Before you start deploying your first production Hyper-V host, make sure that you have completed a detailed planning phase. I have been called in to many Hyper-V projects to assist in repairing what a specialist has implemented. Most of the time, I start by correcting the design, because the biggest failures happen there but are only discovered later, during implementation. I remember many projects in which I was called in to assist with installations and configurations during the implementation phases, because these were thought to be the project phases where a real expert was needed. However, based on experience, this notion is wrong. Most critical to a successful design phase are two things: that it exists at all, which is rare, and that someone with technological and organizational experience with Hyper-V is involved. If you don't have the latter, look for a Microsoft Partner with a Gold Competency called Management and Virtualization on Microsoft Pinpoint (http://pinpoint.microsoft.com) and take a quick look at the reviews done by customers for successful Hyper-V projects. If you think it's expensive to hire a professional, wait until you hire an amateur. Having an expert in the design phase is the best way to accelerate your Hyper-V project.

Overview of the failover cluster

Before you start your first deployment in production, make sure you have defined the aim of the project and its SMART criteria and have done a thorough analysis of the current state. After this, you should be able to plan the necessary steps to reach the target state, including a pilot phase. When a Hyper-V cluster node suffers a hardware failure, the failure is instantly detected. The virtual machines running on that particular node are powered off immediately because of the hardware failure on their compute node. The remaining cluster nodes then immediately take over these VMs in an unplanned failover process and start them on their own hardware. The virtual machines will be back up and running after a successful boot of their operating systems and applications in just a few minutes. Hyper-V Failover Clusters work under the condition that all compute nodes have access to a shared storage instance holding the virtual machine configuration data and virtual hard disks. In case of a planned failover, that is, for patching compute nodes, it's possible to move running virtual machines from one cluster node to another without interrupting the VM. All cluster nodes can run virtual machines at the same time, as long as there is enough failover capacity to run all services when a node goes down. Even though a Hyper-V cluster is still called a Failover Cluster, utilizing the Windows Server Failover Clustering feature, it is indeed capable of running an Active/Active cluster. To ensure that all these capabilities of a Failover Cluster are indeed working, an accurate planning and implementation process is required.

Storage Replica

Storage Replica is a new feature in Windows Server 2016 that provides block-level replication at the storage level for a disaster recovery plan or for a stretched cluster.
Storage Replica can be used in the following scenarios:

Server-to-server storage replication using Storage Replica
Storage replication in a stretch cluster using Storage Replica
Cluster-to-cluster storage replication using Storage Replica
Server-to-itself replication between volumes using Storage Replica

Depending on the scenario and on the bandwidth and latency of the inter-site link, you can choose between synchronous and asynchronous replication. For further information about Storage Replica, you can read about this topic at http://bit.ly/2albebS.

Storage Replica uses the SMB3 protocol for its replication traffic. You can leverage TCP/IP or RDMA on the network. I recommend implementing RDMA where possible to reduce latency and CPU load and to increase throughput. Compared to Hyper-V Replica, the Storage Replica feature replicates all virtual machines stored in a volume at the block level. Moreover, Storage Replica can replicate in synchronous mode, while Hyper-V Replica is always asynchronous. Finally, with Hyper-V Replica you have to specify the failover IP address, because replication happens at the VM level, whereas with Storage Replica you don't need to specify a failover IP address; however, in the case of replication between two clusters in two different rooms, the VM networks must be configured in the destination room.

The choice between Hyper-V Replica and Storage Replica depends on the disaster recovery plan you need. If you want to protect specific application workloads, you can use Hyper-V Replica. On the other hand, if you have a passive room ready to restart in case of issues in the active room, Storage Replica can be a great solution because all the VMs in a volume will already be replicated. To deploy replication between two clusters, you need two sets of storage based on iSCSI, SAS JBOD, Fibre Channel SAN, or Shared VHDX. For better performance, I recommend using SSDs for the Storage Replica log volumes.

Microsoft System Center

Microsoft System Center 2016 is Microsoft's solution for advanced management of Windows Server and its components, along with dependencies such as various hardware and software products. It consists of various components that support every stage of your IT services, from planning and operating to backup and automation. System Center has existed since 1994 and has evolved continuously. It now offers a great set of tools for very efficient management of server and client infrastructures. It also offers the ability to create and operate whole clouds, run in your own data center or in a public cloud data center such as Microsoft Azure. Today, it's your choice whether to run your workloads on-premises or off-premises. System Center provides a standardized set of tools for a unique and consistent Cloud OS management experience. System Center does not add any new features to Hyper-V, but it does offer great ways to make the most out of it and to ensure streamlined operating processes after its implementation. System Center is licensed via the same model as Windows Server, leveraging Standard and Datacenter editions at the physical host level. While every System Center component offers great value in itself, binding multiple components into a single workflow offers even more advantages, as shown in the following overview diagram:

System Center overview

When do you need System Center? There is no right or wrong answer to this, and the answer most often given by any IT consultant around the world is, "It depends".
System Center adds value to any IT environment, starting with only a few systems. In my experience, a Hyper-V environment with up to three hosts and 15 VMs can be managed efficiently without the use of System Center. If you plan to use more hosts or virtual machines, System Center will definitely be a great solution for you. Let's take a look at the components of System Center.

Migrating VMware virtual machines

If you are running virtual machines on VMware ESXi hosts, there are really good options available for moving them to Hyper-V. There are different approaches to converting a VMware virtual machine to Hyper-V: from inside the VM at the guest level, running cold conversions with the VM powered off at the host level, running hot conversions on a running VM, and so on. I will give you a short overview of the tools currently available in the market.

System Center VMM

SCVMM should not be the first tool of your choice; take a look at MVMC combined with MAT to get equal functionality from a better-working tool. Earlier versions of SCVMM allowed online or offline conversions of VMs; the current version, 2016, allows only offline conversions. Select a powered-off VM on a VMware host or from the SCVMM library share to start the conversion. The VM conversion will convert VMware-hosted virtual machines through vCenter and ensure that the entire configuration, such as memory, virtual processors, and other machine settings, is also migrated from the initial source. The tool also adds virtual NICs to the deployed virtual machine on Hyper-V. The VMware tools must be uninstalled before the conversion, because you won't be able to remove the VMware tools once the VM is no longer running on a VMware host. SCVMM 2016 supports ESXi hosts running 4.1 and 5.1, but not the latest ESXi version, 5.5. SCVMM conversions are easy to automate through the integrated PowerShell support, and it's very easy to install upgraded Hyper-V Integration Services as part of the setup or to add any kind of automation through PowerShell or System Center Orchestrator. Apart from manually removing the VMware tools, using SCVMM is an end-to-end solution in the migration process. You can find some PowerShell examples for SCVMM-powered V2V conversion scripts at http://bit.ly/Y4bGp8. I don't recommend the use of this tool anymore, because Microsoft no longer invests time in it.

Microsoft Virtual Machine Converter

Microsoft released the first version of its free solution accelerator, Microsoft Virtual Machine Converter (MVMC), in 2013, and it should be available in version 3.1 by the release of this book. MVMC provides a small and easy option to migrate selected virtual machines to Hyper-V. It takes a very similar approach to the conversion as SCVMM does: the conversion happens at the host level and offers a fully integrated end-to-end solution. MVMC supports all recent versions of VMware vSphere. It will even uninstall the VMware tools and install the Hyper-V Integration Services. MVMC 2.0 works with all supported Hyper-V guest operating systems, including Linux. MVMC comes with a full GUI wizard as well as a fully scriptable command-line interface (CLI). Besides being a free tool, it is fully supported by Microsoft in case you experience any problems during the migration process. MVMC should be the first tool of your choice if you do not know which tool to use.
Like most other conversion tools, MVMC does the actual conversion on the MVMC server itself and requires enough disk space to host the original VMware virtual disk as well as the converted Hyper-V disk. MVMC even offers an add-on for VMware vCenter servers to start conversions directly from the vSphere console. The current release of MVMC is freely available from its official download site at http://bit.ly/1HbRIg7. Download MVMC to the conversion system and start the click-through setup. After finishing the installation, start the MVMC GUI by executing Mvmc.Gui.exe. The wizard guides you through some choices. MVMC is not only capable of migrating to Hyper-V but also allows you to move virtual machines to Microsoft Azure. Follow these few steps to convert a VMware VM:

Select Hyper-V as a target.
Enter the name of the Hyper-V host you want this VM to run on, and specify a fileshare to use and the format of the disks you want to create. Choosing dynamically expanding disks should be the best option most of the time.
Enter the name of the ESXi server you want to use as a source, as well as valid credentials.
Select the virtual machine to convert. Make sure it has VMware tools installed. The VM can be either powered on or off.
Enter a workspace folder to store the converted disk.
Wait for the process to finish.

There is some additional guidance available at http://bit.ly/1vBqj0U. This is a great and easy way to migrate a single virtual machine. Repeat the steps for every other virtual machine you have, or use some automation.

Upgrading single Hyper-V hosts

If you are currently running a single host with an older version of Hyper-V and now want to upgrade this host on the same hardware, there is a limited set of decisions to be made. You want to upgrade the host with the least amount of downtime and without losing any data from your virtual machines. Before you start the upgrade process, make sure all components of your infrastructure are compatible with the new version of Hyper-V. Then it's time to prepare your hardware for this new version of Hyper-V by upgrading all firmware to the latest available version and downloading the necessary drivers for Windows Server 2016 with Hyper-V, along with its installation media. One of the most crucial questions in this upgrade scenario is whether you should use the integrated installation option called in-place upgrade, where the existing operating system is transformed into the recent version of Hyper-V, or delete the current operating system and perform a clean installation. While the installation experience of in-place upgrades works well when only the Hyper-V role is installed, based on experience, some versions of upgraded systems are more likely to suffer problems. Numbers pulled from the Elanity support database show about 15 percent more support cases on systems upgraded from Windows Server 2008 R2 than on clean installations. Remember how fast and easy it is nowadays to do a clean install of Hyper-V; this is why it is highly recommended over upgrading existing installations. If you are currently using Windows Server 2012 R2 and want to upgrade to Windows Server 2016, note that we have not yet seen any difference in the number of support cases between the two installation methods. However, clean installations of Hyper-V are so fast and easy that I barely use in-place upgrades. Before starting any type of upgrade scenario, make sure you have current backups of all affected virtual machines.
Nonetheless, if you want to use the in-place upgrade, insert the Windows Server 2016 installation media and run this command from your current operating system:

    Setup.exe /auto:upgrade

If it fails, it's most likely due to an incompatible application installed on the older operating system. Start the setup without the parameter to find out which applications need to be removed before executing the unattended setup. If you upgrade from Windows Server 2012 R2, no additional preparation is needed; if you upgrade from older operating systems, make sure to remove all snapshots from your virtual machines.

Importing virtual machines

If you choose to do a clean installation of the operating system, you do not necessarily have to export the virtual machines first; just make sure all VMs are powered off and are stored on a different partition than your Hyper-V host OS. If you are using a SAN, disconnect all LUNs before the installation and reconnect them afterwards to ensure their integrity through the installation process. After the installation process, just reconnect the LUNs and set the disks online in diskpart or in Disk Management at Control Panel | Computer Management. If you are using local disks, make sure not to reformat the partition with your virtual machines on it. You should export the VMs to another location and import them back after reformatting; this requires more effort, but it is safer. Set the partition online and then reimport the virtual machines. Before you start the reimport process, make sure all dependencies of your virtual machines are available, especially vSwitches. To import a single Hyper-V VM, use the following PowerShell cmdlet:

    Import-VM -Path 'D:\VMs\VM01\Virtual Machines\2D5EECDA-8ECC-4FFC-ACEE-66DAB72C8754.xml'

To import all virtual machines from a specific folder, use this command:

    Get-ChildItem D:\VMs -Recurse -Filter "Virtual Machines" | %{Get-ChildItem $_.FullName -Filter *.xml} | %{Import-VM $_.FullName -Register}

After that, all VMs are registered and ready for use on your new Hyper-V host. Make sure to update the Hyper-V Integration Services of all virtual machines before going back into production. If you still have virtual disks in the old .vhd format, it's now time to convert them to .vhdx files. Use this PowerShell cmdlet on powered-off VMs or standalone virtual disks to convert a single .vhd file:

    Convert-VHD -Path D:\VMs\testvhd.vhd -DestinationPath D:\VMs\testvhdx.vhdx

If you want to convert the disks of all your VMs, fellow MVPs Aidan Finn and Didier van Hoye have provided a great end-to-end solution to achieve this. It can be found at http://bit.ly/1omOagi. I often hear from customers that they don't want to upgrade their disks so as to be able to revert to older versions of Hyper-V when needed. First, you should know that I have never met a customer who has actually done that, because there really is no technical reason why anyone should. Second, even if you did make this backwards move, running virtual machines on older Hyper-V hosts is not supported if they have previously been deployed on more modern versions of Hyper-V. The reason for this is very simple: Hyper-V does not offer a way to downgrade the Hyper-V Integration Services. The only way to move a virtual machine back to an older Hyper-V host is by restoring a backup of the VM made before the upgrade process.

Exporting virtual machines

If you want to use another physical system running a newer version of Hyper-V, you have multiple possible options.
They are as follows:

When using a SAN as shared storage, make sure all your virtual machines, including their virtual disks, are located on LUNs other than the one holding the host operating system. Disconnect all LUNs hosting virtual machines from the source host and connect them to the target host. Bulk import the VMs from the specified folders.
When using SMB3 shared storage from Scale-Out File Servers, make sure to switch access to the shares hosting the VMs over to the new Hyper-V hosts.
When using local hard drives and upgrading from Windows Server 2008 SP2 or Windows Server 2008 R2 with Hyper-V, it's necessary to export the virtual machines to a storage location reachable from the new host. Hyper-V servers running legacy versions of the OS (prior to 2012 R2) need the VMs powered off before an export can occur.

To export a virtual machine from a host, use the following PowerShell cmdlet:

    Export-VM -Name VM -Path D:\

To export all virtual machines to a folder underneath the given root, use the following command:

    Get-VM | Export-VM -Path D:\

In most cases, it is also possible to just copy the virtual machine folders containing the virtual hard disks and configuration files to the target location and import them on Windows Server 2016 Hyper-V hosts. However, the export method is more reliable and should be preferred. A good alternative to moving virtual machines can be recreating them. If you have another host up and running with a recent version of Hyper-V, it may be a good opportunity to also upgrade some guest OSes. For instance, Windows Server 2003 and 2003 R2 have been out of extended support since July 2015. Depending on your applications, it may now be the right choice to create new virtual machines with Windows Server 2016 as the guest operating system and migrate your existing workloads from the older VMs to these new machines.

Summary

In this article, we learned why Hyper-V projects fail, how to migrate VMware virtual machines, and how to upgrade single Hyper-V hosts. The article also covered an overview of the failover cluster and Storage Replica.

Resources for Article:

Further resources on this subject:
Hyper-V Basics [article]
The importance of Hyper-V Security [article]
Hyper-V building blocks for creating your Microsoft virtualization platform [article]