
How-To Tutorials - Web Development


ASP.NET Social Networks—Making Friends (Part 2)

Packt
22 Oct 2009
18 min read
Implementing the presentation layer

Now that we have the base framework in place, we can start to discuss what it will take to put it all together.

Searching for friends

Let's see what it takes to implement a search for friends.

SiteMaster

We haven't covered much about the actual UI yet, and nothing about the master page of this site. Put simply, we have added a text box and a button to the master page to take in a search phrase. When the button is clicked, this method in the MasterPage code behind is fired:

protected void ibSearch_Click(object sender, EventArgs e)
{
    _redirector.GoToSearch(txtSearch.Text);
}

As you can see, it simply calls the Redirector class and routes the user to the Search.aspx page, passing in the value of txtSearch (as a query string parameter in this case):

public void GoToSearch(string SearchText)
{
    Redirect("~/Search.aspx?s=" + SearchText);
}

Search

The Search.aspx page has no interface of its own. It expects a value to be passed in from the text box in the master page discussed above. With this text phrase we hit our AccountRepository and perform a search using the Contains() operator. The returned list of Accounts is then displayed on the page. For the most part, this page is all about MVP (Model View Presenter) plumbing. Here is the repeater that displays all our data:

<%@ Register Src="~/UserControls/ProfileDisplay.ascx" TagPrefix="Fisharoo" TagName="ProfileDisplay" %>
...
<asp:Repeater ID="repAccounts" runat="server" OnItemDataBound="repAccounts_ItemDataBound">
    <ItemTemplate>
        <Fisharoo:ProfileDisplay ShowDeleteButton="false" ID="pdProfileDisplay" runat="server">
        </Fisharoo:ProfileDisplay>
    </ItemTemplate>
</asp:Repeater>

The fun stuff here comes in the form of the ProfileDisplay user control, which was created so that we have an easy way to display profile data in various places with one chunk of reusable code, allowing us to make global changes.
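The repository side of the search (the Contains() call against the AccountRepository) is not shown in the text above. Here is a minimal sketch of the idea, written in Python for brevity; the class shape and field names are illustrative, not the book's actual Account class.

```python
# Illustrative sketch of a Contains()-style account search.
# The Account fields here are assumptions, not the book's code.
from dataclasses import dataclass

@dataclass
class Account:
    account_id: int
    username: str
    first_name: str
    last_name: str

def search_accounts(accounts, phrase):
    """Return accounts whose username or name contains the phrase (case-insensitive)."""
    phrase = phrase.lower()
    return [a for a in accounts
            if phrase in a.username.lower()
            or phrase in a.first_name.lower()
            or phrase in a.last_name.lower()]

accounts = [
    Account(1, "fishfan", "Andrew", "Siemer"),
    Account(2, "diver99", "Dana", "Fisher"),
]
print([a.username for a in search_accounts(accounts, "fish")])  # ['fishfan', 'diver99']
```

The result list would then be bound to the repeater, with each item rendered by the ProfileDisplay control.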
A user control is like a small self-contained page that you can then insert into your page (or master page). It has its own UI and it has its own code behind (so make sure it also gets its own MVP plumbing!). Also, like a page, it is at the end of the day a simple object, which means that it can have properties, methods, and everything else that you might think to use. Once you have defined a user control you can use it in a few ways. You can programmatically load it using the LoadControl() method and then use it like you would use any other object in a page environment. Or like we did here, you can add a page declaration that registers the control for use in that page. You will notice that we specified where the source for this control lives. Then we gave it a tag prefix and a tag name (similar to using asp:Control). From that point onwards we can refer to our control in the same way that we can declare a TextBox! You should see that we have <Fisharoo:ProfileDisplay ... />. You will also notice that our tag has custom properties that are set in the tag definition. In this case you see ShowDeleteButton="false". 
Here is the user control code in order of display, code behind, and the presenter:

//UserControls/ProfileDisplay.ascx
<%@ Import namespace="Fisharoo.FisharooCore.Core.Domain"%>
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="ProfileDisplay.ascx.cs" Inherits="Fisharoo.FisharooWeb.UserControls.ProfileDisplay" %>
<div style="float:left;">
    <div style="height:130px;float:left;">
        <a href="/Profiles/Profile.aspx?AccountID=<asp:Literal id='litAccountID' runat='server'></asp:Literal>">
        <asp:Image style="padding:5px;width:100px;height:100px;" ImageAlign="Left" Width="100" Height="100" ID="imgAvatar" ImageUrl="~/images/ProfileAvatar/ProfileImage.aspx" runat="server" /></a>
        <asp:ImageButton ImageAlign="AbsMiddle" ID="ibInviteFriend" runat="server" Text="Become Friends" OnClick="lbInviteFriend_Click" ImageUrl="~/images/icon_friends.gif"></asp:ImageButton>
        <asp:ImageButton ImageAlign="AbsMiddle" ID="ibDelete" runat="server" OnClick="ibDelete_Click" ImageUrl="~/images/icon_close.gif" /><br />
        <asp:Label ID="lblUsername" runat="server"></asp:Label><br />
        <asp:Label ID="lblFirstName" runat="server"></asp:Label>
        <asp:Label ID="lblLastName" runat="server"></asp:Label><br />
        Since: <asp:Label ID="lblCreateDate" runat="server"></asp:Label><br />
        <asp:Label ID="lblFriendID" runat="server" Visible="false"></asp:Label>
    </div>
</div>

//UserControls/ProfileDisplay.ascx.cs
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using Fisharoo.FisharooCore.Core.Domain;
using Fisharoo.FisharooWeb.UserControls.Interfaces;
using Fisharoo.FisharooWeb.UserControls.Presenters;

namespace Fisharoo.FisharooWeb.UserControls
{
    public partial class ProfileDisplay : System.Web.UI.UserControl, IProfileDisplay
    {
        private ProfileDisplayPresenter _presenter;
        protected Account _account;

        protected void Page_Load(object sender, EventArgs e)
        {
            _presenter = new ProfileDisplayPresenter();
            _presenter.Init(this);
            ibDelete.Attributes.Add("onclick", "javascript:return confirm('Are you sure you want to delete this friend?')");
        }

        public bool ShowDeleteButton
        {
            set { ibDelete.Visible = value; }
        }

        public bool ShowFriendRequestButton
        {
            set { ibInviteFriend.Visible = value; }
        }

        public void LoadDisplay(Account account)
        {
            _account = account;
            ibInviteFriend.Attributes.Add("FriendsID", _account.AccountID.ToString());
            ibDelete.Attributes.Add("FriendsID", _account.AccountID.ToString());
            litAccountID.Text = account.AccountID.ToString();
            lblLastName.Text = account.LastName;
            lblFirstName.Text = account.FirstName;
            lblCreateDate.Text = account.CreateDate.ToString();
            imgAvatar.ImageUrl += "?AccountID=" + account.AccountID.ToString();
            lblUsername.Text = account.Username;
            lblFriendID.Text = account.AccountID.ToString();
        }

        protected void lbInviteFriend_Click(object sender, EventArgs e)
        {
            _presenter = new ProfileDisplayPresenter();
            _presenter.Init(this);
            _presenter.SendFriendRequest(Convert.ToInt32(lblFriendID.Text));
        }

        protected void ibDelete_Click(object sender, EventArgs e)
        {
            _presenter = new ProfileDisplayPresenter();
            _presenter.Init(this);
            _presenter.DeleteFriend(Convert.ToInt32(lblFriendID.Text));
        }
    }
}

//UserControls/Presenter/ProfileDisplayPresenter.cs
using System;
using System.Data;
using System.Configuration;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using Fisharoo.FisharooCore.Core;
using Fisharoo.FisharooCore.Core.DataAccess;
using Fisharoo.FisharooWeb.UserControls.Interfaces;
using StructureMap;

namespace Fisharoo.FisharooWeb.UserControls.Presenters
{
    public class ProfileDisplayPresenter
    {
        private IProfileDisplay _view;
        private IRedirector _redirector;
        private IFriendRepository _friendRepository;
        private IUserSession _userSession;

        public ProfileDisplayPresenter()
        {
            _redirector = ObjectFactory.GetInstance<IRedirector>();
            _friendRepository = ObjectFactory.GetInstance<IFriendRepository>();
            _userSession = ObjectFactory.GetInstance<IUserSession>();
        }

        public void Init(IProfileDisplay view)
        {
            _view = view;
        }

        public void SendFriendRequest(Int32 AccountIdToInvite)
        {
            _redirector.GoToFriendsInviteFriends(AccountIdToInvite);
        }

        public void DeleteFriend(Int32 FriendID)
        {
            if (_userSession.CurrentUser != null)
            {
                _friendRepository.DeleteFriendByID(_userSession.CurrentUser.AccountID, FriendID);
                HttpContext.Current.Response.Redirect(HttpContext.Current.Request.RawUrl);
            }
        }
    }
}

All this logic and display is very standard. You have the MVP plumbing, which makes up most of it. Outside of that, you will notice that the ProfileDisplay control has a LoadDisplay() method responsible for loading the UI for that control. In the Search page this is done in the repAccounts_ItemDataBound() method:

protected void repAccounts_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
    {
        ProfileDisplay pd = e.Item.FindControl("pdProfileDisplay") as ProfileDisplay;
        pd.LoadDisplay((Account)e.Item.DataItem);
        if (_webContext.CurrentUser == null)
            pd.ShowFriendRequestButton = false;
    }
}

The ProfileDisplay control also has a couple of properties: one to show/hide the delete friend button and the other to show/hide the invite friend button. These buttons are not appropriate for every page the control is used in. In the search results page we want to hide the Delete button, as the results are not necessarily friends; there we want to be able to invite them. In a list of our friends, however, the Invite button would no longer be appropriate, as each of those users is already a friend; there the Delete button is the more appropriate one.
Clicking on the Invite button makes a call to the Redirector class and routes the user to the InviteFriends page:

//UserControls/Presenter/ProfileDisplayPresenter.cs
public void SendFriendRequest(Int32 AccountIdToInvite)
{
    _redirector.GoToFriendsInviteFriends(AccountIdToInvite);
}

//Core/Impl/Redirector.cs
public void GoToFriendsInviteFriends(Int32 AccoundIdToInvite)
{
    Redirect("~/Friends/InviteFriends.aspx?AccountIdToInvite=" + AccoundIdToInvite.ToString());
}

Inviting your friends

This page allows us to manually enter email addresses of friends whom we want to invite. It is a standard From, To, Message format where the system specifies the sender (you), and you specify who to send to and the message that you want to send.

//Friends/InviteFriends.aspx
<%@ Page Language="C#" MasterPageFile="~/SiteMaster.Master" AutoEventWireup="true" CodeBehind="InviteFriends.aspx.cs" Inherits="Fisharoo.FisharooWeb.Friends.InviteFriends" %>
<asp:Content ContentPlaceHolderID="Content" runat="server">
    <div class="divContainer">
        <div class="divContainerBox">
            <div class="divContainerTitle">Invite Your Friends</div>
            <asp:Panel ID="pnlInvite" runat="server">
                <div class="divContainerRow">
                    <div class="divContainerCellHeader">From:</div>
                    <div class="divContainerCell"><asp:Label ID="lblFrom" runat="server"></asp:Label></div>
                </div>
                <div class="divContainerRow">
                    <div class="divContainerCellHeader">To:<br /><div class="divContainerHelpText">(use commas to<BR />separate emails)</div></div>
                    <div class="divContainerCell"><asp:TextBox ID="txtTo" runat="server" TextMode="MultiLine" Columns="40" Rows="5"></asp:TextBox></div>
                </div>
                <div class="divContainerRow">
                    <div class="divContainerCellHeader">Message:</div>
                    <div class="divContainerCell"><asp:TextBox ID="txtMessage" runat="server" TextMode="MultiLine" Columns="40" Rows="10"></asp:TextBox></div>
                </div>
                <div class="divContainerFooter">
                    <asp:Button ID="btnInvite" runat="server" Text="Invite" OnClick="btnInvite_Click" />
                </div>
            </asp:Panel>
            <div class="divContainerRow">
                <div class="divContainerCell"><br /><asp:Label ID="lblMessage" runat="server"></asp:Label><br /><br /></div>
            </div>
        </div>
    </div>
</asp:Content>

Running the code displays the invitation form. This is a simple page, so the majority of the code for it is MVP plumbing. The most important part to notice here is that when the Invite button is clicked, the presenter is notified to send the invitation.

//Friends/InviteFriends.aspx.cs
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using Fisharoo.FisharooWeb.Friends.Interface;
using Fisharoo.FisharooWeb.Friends.Presenter;

namespace Fisharoo.FisharooWeb.Friends
{
    public partial class InviteFriends : System.Web.UI.Page, IInviteFriends
    {
        private InviteFriendsPresenter _presenter;

        protected void Page_Load(object sender, EventArgs e)
        {
            _presenter = new InviteFriendsPresenter();
            _presenter.Init(this);
        }

        protected void btnInvite_Click(object sender, EventArgs e)
        {
            _presenter.SendInvitation(txtTo.Text, txtMessage.Text);
        }

        public void DisplayToData(string To)
        {
            lblFrom.Text = To;
        }

        public void TogglePnlInvite(bool IsVisible)
        {
            pnlInvite.Visible = IsVisible;
        }

        public void ShowMessage(string Message)
        {
            lblMessage.Text = Message;
        }

        public void ResetUI()
        {
            txtMessage.Text = "";
            txtTo.Text = "";
        }
    }
}

Once this call is made we leap across to the presenter (more plumbing!).
//Friends/Presenter/InviteFriendsPresenter.cs
using System;
using System.Data;
using System.Configuration;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using Fisharoo.FisharooCore.Core;
using Fisharoo.FisharooCore.Core.DataAccess;
using Fisharoo.FisharooCore.Core.Domain;
using Fisharoo.FisharooWeb.Friends.Interface;
using StructureMap;

namespace Fisharoo.FisharooWeb.Friends.Presenter
{
    public class InviteFriendsPresenter
    {
        private IInviteFriends _view;
        private IUserSession _userSession;
        private IEmail _email;
        private IFriendInvitationRepository _friendInvitationRepository;
        private IAccountRepository _accountRepository;
        private IWebContext _webContext;
        private Account _account;
        private Account _accountToInvite;

        public void Init(IInviteFriends view)
        {
            _view = view;
            _userSession = ObjectFactory.GetInstance<IUserSession>();
            _email = ObjectFactory.GetInstance<IEmail>();
            _friendInvitationRepository = ObjectFactory.GetInstance<IFriendInvitationRepository>();
            _accountRepository = ObjectFactory.GetInstance<IAccountRepository>();
            _webContext = ObjectFactory.GetInstance<IWebContext>();
            _account = _userSession.CurrentUser;
            if (_account != null)
            {
                _view.DisplayToData(_account.FirstName + " " + _account.LastName + " &lt;" + _account.Email + "&gt;");
                if (_webContext.AccoundIdToInvite > 0)
                {
                    _accountToInvite = _accountRepository.GetAccountByID(_webContext.AccoundIdToInvite);
                    if (_accountToInvite != null)
                    {
                        SendInvitation(_accountToInvite.Email, _account.FirstName + " " + _account.LastName + " would like to be your friend!");
                        _view.ShowMessage(_accountToInvite.Username + " has been sent a friend request!");
                        _view.TogglePnlInvite(false);
                    }
                }
            }
        }

        public void SendInvitation(string ToEmailArray, string Message)
        {
            string resultMessage = "Invitations sent to the following recipients:<BR>";
            resultMessage += _email.SendInvitations(_userSession.CurrentUser, ToEmailArray, Message);
            _view.ShowMessage(resultMessage);
            _view.ResetUI();
        }
    }
}

The interesting thing here is the SendInvitation() method, which takes in a comma delimited array of emails and the message to be sent in the invitation. It then makes a call to the Email.SendInvitations() method.

//Core/Impl/Email.cs
public string SendInvitations(Account sender, string ToEmailArray, string Message)
{
    string resultMessage = Message;
    foreach (string s in ToEmailArray.Split(','))
    {
        FriendInvitation friendInvitation = new FriendInvitation();
        friendInvitation.AccountID = sender.AccountID;
        friendInvitation.Email = s;
        friendInvitation.GUID = Guid.NewGuid();
        friendInvitation.BecameAccountID = 0;
        _friendInvitationRepository.SaveFriendInvitation(friendInvitation);

        //add alert to existing users alerts
        Account account = _accountRepository.GetAccountByEmail(s);
        if (account != null)
        {
            _alertService.AddFriendRequestAlert(_userSession.CurrentUser, account, friendInvitation.GUID, Message);
        }

        //TODO: MESSAGING - if this email is already in our system add a message through messaging system
        //if(email in system)
        //{
        //    add message to messaging system
        //}
        //else
        //{
            // send email
            SendFriendInvitation(s, sender.FirstName, sender.LastName, friendInvitation.GUID.ToString(), Message);
        //}
        resultMessage += "• " + s + "<BR>";
    }
    return resultMessage;
}

This method is responsible for parsing out all the emails, creating a new FriendInvitation, and sending the request via email to the person who was invited. It then adds an alert to the invited user if they have an Account. Finally, once the messaging system is built, we will also have to add a notification there.

Outlook CSV importer

The Import Contacts page is responsible for allowing our users to upload an exported contacts file from MS Outlook into our system. Once they have imported their contacts, the user is allowed to select which email addresses are actually invited into our system.
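The shape of the SendInvitations() flow is easy to see in miniature: split the comma-delimited address list, persist one invitation record per address, and accumulate a result message. The following Python sketch is illustrative only; the repository and mailer calls are stubbed out, and all names are assumptions rather than the book's code.

```python
# Illustrative sketch of the SendInvitations() flow. The "saved" list
# stands in for SaveFriendInvitation(); email sending is omitted.
import uuid

def send_invitations(sender_id, to_email_array, saved):
    result = "Invitations sent to the following recipients:\n"
    for email in to_email_array.split(","):
        email = email.strip()
        invitation = {
            "account_id": sender_id,          # the inviting user
            "email": email,                   # who is being invited
            "guid": str(uuid.uuid4()),        # token for the signup link
            "became_account_id": 0,           # filled in when they register
        }
        saved.append(invitation)              # stand-in for the repository save
        result += "- " + email + "\n"
    return result

saved = []
msg = send_invitations(42, "a@example.com, b@example.com", saved)
print(msg)
```

The GUID plays the same role as in the book's code: it is carried in the invitation email so a later signup can be tied back to the invitation.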
Importing contacts

As this page is made up of a couple of views, let's begin with the initial view:

//Friends/OutlookCsvImporter.aspx
<asp:Panel ID="pnlUpload" runat="server">
    <div class="divContainerTitle">Import Contacts</div>
    <div class="divContainerRow">
        <div class="divContainerCellHeader">Contacts File:</div>
        <div class="divContainerCell"><asp:FileUpload ID="fuContacts" runat="server" /></div>
    </div>
    <div class="divContainerRow">
        <div class="divContainerFooter"><asp:Button ID="btnUpload" Text="Upload & Preview Contacts" runat="server" OnClick="btnUpload_Click" /></div>
    </div>
    <br /><br />
    <div class="divContainerRow">
        <div class="divContainerTitle">How do I export my contacts from Outlook?</div>
        <div class="divContainerCell">
            <ol>
                <li>Open Outlook</li>
                <li>In the File menu choose Import and Export</li>
                <li>Choose export to a file and click next</li>
                <li>Choose comma separated values and click next</li>
                <li>Select your contacts and click next</li>
                <li>Browse to the location you want to save your contacts file</li>
                <li>Click finish</li>
            </ol>
        </div>
    </div>
</asp:Panel>

As you can see from the code, we are working in panels here. This panel is responsible for allowing a user to upload their contacts CSV file. It also gives the user some directions on how to export contacts from Outlook. The view has a file upload box that allows the user to browse for their CSV file, and a button to tell us when they are ready for the upload. A method in our presenter handles the button click from the view:

//Friends/Presenter/OutlookCsvImporterPresenter.cs
public void ParseEmails(HttpPostedFile file)
{
    using (Stream s = file.InputStream)
    {
        StreamReader sr = new StreamReader(s);
        string contacts = sr.ReadToEnd();
        _view.ShowParsedEmail(_email.ParseEmailsFromText(contacts));
    }
}

This method is responsible for handling the upload process of the HttpPostedFile. It puts the file reference into a StreamReader and then reads the stream into a string variable named contacts. Once we have the entire list of contacts, we can call into our Email class and parse all the emails out:

//Core/Impl/Email.cs
public List<string> ParseEmailsFromText(string text)
{
    List<string> emails = new List<string>();
    string strRegex = @"\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*";
    Regex re = new Regex(strRegex, RegexOptions.Multiline);
    foreach (Match m in re.Matches(text))
    {
        string email = m.ToString();
        if (!emails.Contains(email))
            emails.Add(email);
    }
    return emails;
}

This method expects a string that contains some email addresses we want to parse. It parses the emails using a regular expression (which we won't go into in detail!). We iterate through all the matches in the Regex and add the found email addresses to our list, provided they aren't already present. Once we have found all the email addresses, we return the list of unique addresses. The presenter then passes that list of parsed emails to the view.

Selecting contacts

Once we have handled the upload process and parsed out the emails, we need to display all the emails to the user so that they can select which ones they want to invite. Now you could do several sneaky things here. Technically, the user has uploaded all of their email addresses to you. You have them. You could store them. You could invite every single address regardless of what the user wants. And while this might benefit your community over the short run, your users would eventually find out about your sneaky practice and your community would start to dwindle. Don't take advantage of your users' trust!
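The same parse-and-deduplicate logic can be seen in a runnable form. This Python sketch mirrors ParseEmailsFromText() using the C# pattern from above (function and variable names are illustrative):

```python
# Python equivalent of ParseEmailsFromText(): find all email-shaped
# substrings and keep each address only once, preserving order.
import re

EMAIL_RE = re.compile(r"\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*")

def parse_emails_from_text(text):
    emails = []
    for m in EMAIL_RE.finditer(text):
        email = m.group(0)        # the full matched address
        if email not in emails:   # skip duplicates
            emails.append(email)
    return emails

text = "alice@example.com, bob@mail.example.org, alice@example.com"
print(parse_emails_from_text(text))  # ['alice@example.com', 'bob@mail.example.org']
```

A list membership test keeps the implementation close to the C# original; for very large contact files a set would make the duplicate check cheaper.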
//Friends/OutlookCsvImporter.aspx
<asp:Panel Visible="false" ID="pnlEmails" runat="server">
    <div class="divContainerTitle">Select Contacts</div>
    <div class="divContainerFooter"><asp:Button ID="btnInviteContacts1" runat="server" OnClick="btnInviteContacts_Click" Text="Invite Selected Contacts" /></div>
    <div class="divContainerCell" style="text-align:left;">
        <asp:CheckBoxList ID="cblEmails" RepeatColumns="2" runat="server"></asp:CheckBoxList>
    </div>
    <div class="divContainerFooter"><asp:Button ID="btnInviteContacts2" runat="server" OnClick="btnInviteContacts_Click" Text="Invite Selected Contacts" /></div>
</asp:Panel>

Notice that we have a checkbox list in our panel. This checkbox list is bound to the returned list of email addresses:

public void ShowParsedEmail(List<string> Emails)
{
    pnlUpload.Visible = false;
    pnlResult.Visible = false;
    pnlEmails.Visible = true;
    cblEmails.DataSource = Emails;
    cblEmails.DataBind();
}

The user now has a list of all the email addresses that they uploaded, and can go through it selecting the ones they want to invite into our system. Once they are through selecting, they click on the Invite button. We then iterate through all the items in the checkbox list to locate the selected items:

protected void btnInviteContacts_Click(object sender, EventArgs e)
{
    string emails = "";
    foreach (ListItem li in cblEmails.Items)
    {
        if (li != null && li.Selected)
            emails += li.Text + ",";
    }
    emails = emails.Substring(0, emails.Length - 1);
    _presenter.InviteContacts(emails);
}

Once we have gathered all the selected emails, we pass them to the presenter to run the invitation process:

public void InviteContacts(string ToEmailArray)
{
    string result = _email.SendInvitations(_userSession.CurrentUser, ToEmailArray, "");
    _view.ShowInvitationResult(result);
}

The presenter promptly passes the selected items to the Email class to handle the invitations. This is the same SendInvitations() method that we used in the last section to invite users:

//Core/Impl/Email.cs
public string SendInvitations(Account sender, string ToEmailArray, string Message)
{
    ...
}

We then output the result of the emails that we invited into the third display:

<asp:Panel ID="pnlResult" runat="server" Visible="false">
    <div class="divContainerTitle">Invitations Sent!</div>
    <div class="divContainerCell">
        Invitations were sent to the following emails:<br />
        <asp:Label ID="lblMessage" runat="server"></asp:Label>
    </div>
</asp:Panel>
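As a side note, the click handler that gathers the selected addresses appends a trailing comma and then trims it off with Substring(). In languages with a join primitive the trim step disappears entirely. A small Python sketch with hypothetical checkbox data:

```python
# Building the comma-delimited address string from selected checkboxes.
# The (text, selected) pairs stand in for the CheckBoxList items.
selected_items = [
    ("a@example.com", True),
    ("b@example.com", False),
    ("c@example.com", True),
]

# join() inserts separators only between elements, so no trailing comma
# ever appears and nothing needs to be trimmed afterwards.
emails = ",".join(text for text, selected in selected_items if selected)
print(emails)  # a@example.com,c@example.com
```

The C# equivalent would be string.Join over the selected items, which also avoids the edge case where no item is selected and Substring() is called on an empty string.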


The Software Task Management Tool - Rake

Packt
16 Apr 2014
5 min read
(For more resources related to this topic, see here.)

Installing Rake

As Rake is a Ruby library, you should first install Ruby on the system if you don't have it installed already. The installation process is different for each operating system; however, we will see the installation example only for the Debian operating system family. Just open the terminal and write the following installation command:

$ sudo apt-get install ruby

If you have an operating system that doesn't contain the apt-get utility, or if you have problems with the Ruby installation, please refer to the official instructions at https://www.ruby-lang.org/en/installation. There are a lot of ways to install Ruby, so please choose your operating system from the list on this page and select your desired installation method.

Rake has been included in the Ruby core since Ruby 1.9, so you don't have to install it as a separate gem. However, if you still use Ruby 1.8 or an older version, you will have to install Rake as a gem. Use the following command to install it:

$ gem install rake

The Ruby release cycle is slower than that of Rake, and sometimes you need to install Rake as a gem to work around some special issues. So you can still install Rake as a gem, and in some cases this is a requirement even for Ruby version 1.9 and higher.

To check if you have installed it correctly, open your terminal and type the following command:

$ rake --version

This should return the installed Rake version. The next sign that Rake is installed and working correctly is an error that you see after typing the rake command in a directory without a Rakefile:

$ mkdir ~/test-rake
$ cd ~/test-rake
$ rake
rake aborted!
No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
(See full trace by running task with --trace)

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com.
If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Introducing rake tasks

From the previous error message, it's clear that first you need to have a Rakefile. As you can see, there are four variants of its name: rakefile, Rakefile, rakefile.rb, and Rakefile.rb. The most popularly used variant is Rakefile; Rails also uses it. However, you can choose any variant for your project. There is no convention that prohibits you from using any of the four suggested variants.

A Rakefile is a file that is required for any Rake-based project. Apart from the fact that its content usually contains DSL, it's also a general Ruby file, so you can write any Ruby code in it. Perform the following steps to get started.

Let's create a Rakefile in the current folder, which will just say Hello Rake, using the following commands:

$ echo "puts 'Hello Rake'" > Rakefile
$ cat Rakefile
puts 'Hello Rake'

Here, the first line creates a Rakefile with the content puts 'Hello Rake', and the second line just shows us its content to make sure that we've done everything correctly. Now, run rake as we tried it before, using the following command:

$ rake
Hello Rake
rake aborted!
Don't know how to build task 'default'
(See full trace by running task with --trace)

The message has changed: it says Hello Rake, and then the run gets aborted with another error message. At this moment, we have made the first step in learning Rake. Now, we have to define a default rake task that will be executed when you try to start Rake without any arguments. To do so, open your editor and change the created Rakefile to the following content:

task :default do
  puts 'Hello, Rake'
end

Now, run rake again:

$ rake
Hello, Rake

The output that says Hello, Rake demonstrates that the task works correctly.

The command-line arguments

The most commonly used rake command-line argument is -T.
It shows us a list of available rake tasks that you have already defined. We have defined the default rake task, and if we try to show the list of all rake tasks, it should be there. However, take a look at what happens in real life using the following command:

$ rake -T

The list is empty. Why? The answer lies within Rake. Run the rake command with the -h option to get the whole list of arguments. Pay attention to the description of the -T option, as shown in the following command-line output:

-T, --tasks [PATTERN]   Display the tasks (matching optional PATTERN) with descriptions, then exit.

You can get more information on Rake in its repository on GitHub at https://github.com/jimweirich/rake.

The word "description" is the cornerstone here. A description is optional when you define a rake task, but it's recommended that you provide one: without it, the task won't show up in the -T list, and it will be inconvenient for you to read your Rakefile every time you try to run some rake task. Just accept it as a rule: always leave a description for the defined rake tasks. Now, add a description to your rake task with the desc method call, as shown in the following lines of code:

desc "Says 'Hello, Rake'"
task :default do
  puts 'Hello, Rake.'
end

As you see, it's rather easy. Run the rake -T command again and you will see an output as shown:

$ rake -T
rake default  # Says 'Hello, Rake'

If you want to list all the tasks even if they don't have descriptions, you can pass an -A option with the -T option to the rake command. The resulting command will look like this: rake -T -A.


BackTrack 5: Advanced WLAN Attacks

Packt
13 Sep 2011
4 min read
(For more resources on BackTrack, see here.)

Man-in-the-Middle attack

MITM attacks are probably one of the most potent attacks on a WLAN system. There are different configurations that can be used to conduct the attack. We will use the most common one: the attacker is connected to the Internet using a wired LAN and is creating a fake access point on his client card. This access point broadcasts an SSID similar to a local hotspot in the vicinity. A user may accidentally connect to this fake access point and may continue to believe that he is connected to the legitimate access point. The attacker can now transparently forward all the user's traffic over the Internet using the bridge he has created between the wired and wireless interfaces. In the following lab exercise, we will simulate this attack.

Time for action – Man-in-the-Middle attack

Follow these instructions to get started:

To create the Man-in-the-Middle attack setup, we will first create a soft access point called mitm on the hacker laptop using airbase-ng:

$ airbase-ng --essid mitm -c 11 mon0

It is important to note that airbase-ng, when run, creates an interface at0 (a tap interface). Think of this as the wired-side interface of our software-based access point mitm.

Let us now create a bridge on the hacker laptop, consisting of the wired (eth0) and wireless (at0) interfaces. The succession of commands used for this is:

$ brctl addbr mitm-bridge
$ brctl addif mitm-bridge eth0
$ brctl addif mitm-bridge at0
$ ifconfig eth0 0.0.0.0 up
$ ifconfig at0 0.0.0.0 up

We can assign an IP address to this bridge and check the connectivity with the gateway. Please note that we could do the same using DHCP as well. We can assign an IP address to the bridge interface with the following command:

$ ifconfig mitm-bridge 192.168.0.199 up
We can then try pinging the gateway 192.168.0.1 to ensure that we are connected to the rest of the network.

Let us now turn on IP forwarding in the kernel so that routing and packet forwarding can happen correctly:

$ echo 1 > /proc/sys/net/ipv4/ip_forward

Now let us connect a wireless client to our access point mitm. It would automatically get an IP address over DHCP (the server running on the wired-side gateway). The client machine in this case receives the IP address 192.168.0.197. We can ping the wired-side gateway 192.168.0.1 to verify connectivity, and we see that the host responds to the ping requests. We can also verify that the client is connected by looking at the airbase-ng terminal on the hacker machine.

It is interesting to note here that because all the traffic is being relayed from the wireless interface to the wired side, we have full control over the traffic. We can verify this by starting Wireshark and sniffing on the at0 interface.

Let us now ping the gateway 192.168.0.1 from the client machine. We can now see the packets in Wireshark (apply a display filter for ICMP), even though the packets are not destined for us. This is the power of Man-in-the-Middle attacks!

What just happened?

We have successfully created the setup for a wireless Man-in-the-Middle attack. We did this by creating a fake access point and bridging it with our Ethernet interface. This ensured that any wireless client connecting to the fake access point would "perceive" that it is connected to the Internet via the wired LAN.

Have a go hero – Man-in-the-Middle over pure wireless

In the previous exercise, we bridged the wireless interface with a wired one. As we noted earlier, this is one of the possible connection architectures for an MITM. There are other combinations possible as well.
An interesting one would be to have two wireless interfaces, one creates the fake access point and the other interface is connected to the authorized access point. Both these interfaces are bridged. So, when a wireless client connects to our fake access point, it gets connected to the authorized access point through the attacker machine. Please note that this configuration would require the use of two wireless cards on the attacker laptop. Check if you can conduct this attack using the in-built card on your laptop along with the external one. This should be a good challenge!
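The full bridged setup above can be summarized as one command sequence. This is a sketch only: the interface names (mon0, eth0, at0) and the IP address are the ones used in the exercise and will differ on your hardware, and all of these commands require root privileges.

```sh
# Fake access point on the monitor-mode interface (ESSID and channel from the exercise)
airbase-ng --essid mitm -c 11 mon0    # creates the tap interface at0

# Bridge the wired (eth0) and wireless (at0) sides
brctl addbr mitm-bridge
brctl addif mitm-bridge eth0
brctl addif mitm-bridge at0
ifconfig eth0 0.0.0.0 up
ifconfig at0 0.0.0.0 up
ifconfig mitm-bridge 192.168.0.199 up # or obtain an address over DHCP instead

# Let the kernel forward packets between the interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
```

For the pure-wireless variant, the eth0 lines would instead reference the second wireless interface, the one associated with the authorized access point.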
PHP 5 Social Networking: Private Messages

Packt
26 Oct 2010
7 min read
PHP 5 Social Networking

We obviously need to keep private messages separate from the rest of the site, and ensure that they are only accessible to the sender and the receiver. While we could alter the public messages feature developed earlier, this would raise a few issues: it would be more difficult to tell whether the message being sent or read was private, and when using the Internet in a public area, the message would be shown on the area of the social network the user would most likely be visiting, which isn't ideal for private information. Because private messages will be separate from statuses, and won't need to make use of other media types to make them more interesting (though we could set them up to make use of other media if we wanted), it makes sense for us to also use separate database tables and models for this feature.

Database

Our database needs provisions for the sender of the message, the recipient of the message, the subject of the message, and of course the message itself. We should also provide for whether the message has been read, when the message was sent, and an ID for the message. The following illustrates a suitable structure for a messages table in our database:

Field     | Type                                | Description
ID        | Integer, Autoincrement, Primary Key | Reference ID for the message
Sender    | Integer                             | The sender of the message
Recipient | Integer                             | The recipient of the message
Subject   | Varchar                             | The subject the message relates to
Sent      | Timestamp                           | When the message was sent
Message   | Longtext                            | The contents of the message itself
Read      | Boolean                             | Indicates whether the message has been read or not

More than one recipient? This database structure, and the code that follows, only supports one recipient per message. Our users might want to send to more than one recipient; feel free to add this functionality if you wish.

Message model

As with the majority of our database access, we require a model (models/message.
php) to create, update, and retrieve message-related data from the database and encapsulate it within itself. It would also be helpful if the model pulled in a little more information from the database, including:

A more user-friendly representation of the date (we can get this via the MySQL DATE_FORMAT function)
The name of the sender, by joining the messages table to the profile table
The name of the recipient, by joining the messages table to the profile table again

The first part of our model simply defines the class variables:

<?php
/**
 * Private message class
 */
class Message {
    /**
     * The registry object
     */
    private $registry;
    /**
     * ID of the message
     */
    private $id=0;
    /**
     * ID of the sender
     */
    private $sender;
    /**
     * Name of the sender
     */
    private $senderName;
    /**
     * ID of the recipient
     */
    private $recipient;
    /**
     * Name of the recipient
     */
    private $recipientName;
    /**
     * Subject of the message
     */
    private $subject;
    /**
     * When the message was sent (TIMESTAMP)
     */
    private $sent;
    /**
     * User readable, friendly format of the time the message was sent
     */
    private $sentFriendlyTime;
    /**
     * Has the message been read
     */
    private $read=0;
    /**
     * The message content itself
     */
    private $message;

The constructor takes the registry and ID of the message as parameters; if the ID has been defined, it queries the database and sets the class variables.
The database query here also formats a copy of the date into a friendlier format, and looks up the names of the sender and recipient of the message:

    /**
     * Message constructor
     * @param Registry $registry the registry object
     * @param int $id the ID of the message
     * @return void
     */
    public function __construct( Registry $registry, $id=0 )
    {
        $this->registry = $registry;
        $this->id = $id;
        if( $this->id > 0 )
        {
            $sql = "SELECT m.*,
                DATE_FORMAT(m.sent, '%D %M %Y') as sent_friendly,
                psender.name as sender_name,
                precipient.name as recipient_name
                FROM messages m, profile psender, profile precipient
                WHERE precipient.user_id=m.recipient
                AND psender.user_id=m.sender
                AND m.ID=" . $this->id;
            $this->registry->getObject('db')->executeQuery( $sql );
            if( $this->registry->getObject('db')->numRows() > 0 )
            {
                $data = $this->registry->getObject('db')->getRows();
                $this->sender = $data['sender'];
                $this->recipient = $data['recipient'];
                $this->sent = $data['sent'];
                $this->read = $data['read'];
                $this->subject = $data['subject'];
                $this->message = $data['message'];
                $this->sentFriendlyTime = $data['sent_friendly'];
                $this->senderName = $data['sender_name'];
                $this->recipientName = $data['recipient_name'];
            }
            else
            {
                $this->id = 0;
            }
        }
    }

Next, we have setter methods for most of the class variables:

    /**
     * Set the sender of the message
     * @param int $sender
     * @return void
     */
    public function setSender( $sender )
    {
        $this->sender = $sender;
    }
    /**
     * Set the recipient of the message
     * @param int $recipient
     * @return void
     */
    public function setRecipient( $recipient )
    {
        $this->recipient = $recipient;
    }
    /**
     * Set the subject of the message
     * @param String $subject
     * @return void
     */
    public function setSubject( $subject )
    {
        $this->subject = $subject;
    }
    /**
     * Set if the message has been read
     * @param boolean $read
     * @return void
     */
    public function setRead( $read )
    {
        $this->read = $read;
    }
    /**
     * Set the message itself
     * @param String $message
     * @return void
     */
    public function setMessage( $message )
    {
        $this->message =
$message;
    }

The save method takes the class variables that directly relate to the messages table in the database and either inserts them as a new record, or updates the existing record:

    /**
     * Save the message into the database
     * @return void
     */
    public function save()
    {
        if( $this->id > 0 )
        {
            $update = array();
            $update['sender'] = $this->sender;
            $update['recipient'] = $this->recipient;
            $update['read'] = $this->read;
            $update['subject'] = $this->subject;
            $update['message'] = $this->message;
            $this->registry->getObject('db')->updateRecords( 'messages', $update, 'ID=' . $this->id );
        }
        else
        {
            $insert = array();
            $insert['sender'] = $this->sender;
            $insert['recipient'] = $this->recipient;
            $insert['read'] = $this->read;
            $insert['subject'] = $this->subject;
            $insert['message'] = $this->message;
            $this->registry->getObject('db')->insertRecords( 'messages', $insert );
            $this->id = $this->registry->getObject('db')->lastInsertID();
        }
    }

One getter method that we need is to return the user ID of the recipient, so we can check that the currently logged in user has permission to read the message:

    /**
     * Get the recipient of the message
     * @return int
     */
    public function getRecipient()
    {
        return $this->recipient;
    }

We should also provide a method to delete the message from the database, should the user wish to delete a message:

    /**
     * Delete the current message
     * @return boolean
     */
    public function delete()
    {
        $sql = "DELETE FROM messages WHERE ID=" .
$this->id;
        $this->registry->getObject('db')->executeQuery( $sql );
        if( $this->registry->getObject('db')->affectedRows() > 0 )
        {
            $this->id = 0;
            return true;
        }
        else
        {
            return false;
        }
    }

Finally, we have a toTags method, which converts all of the non-object and non-array variables into template tags, so when we create a view message method in the controller, we simply need to construct the message object and call the toTags method:

    /**
     * Convert the message data to template tags
     * @param String $prefix prefix for the template tags
     * @return void
     */
    public function toTags( $prefix='' )
    {
        foreach( $this as $field => $data )
        {
            if( ! is_object( $data ) && ! is_array( $data ) )
            {
                $this->registry->getObject('template')->getPage()->addTag( $prefix.$field, $data );
            }
        }
    }
}
?>

Messages model

Similar to how we have a model for representing a single relationship and another for representing a number of relationships, we also need a model to represent a number of messages within the site. This is to handle the lookup of a user's private message inbox.

<?php
/**
 * Messages model
 */
class Messages {
    /**
     * Messages constructor
     * @param Registry $registry
     * @return void
     */
    public function __construct( Registry $registry )
    {
        $this->registry = $registry;
    }
    /**
     * Get a users inbox
     * @param int $user the user
     * @return int the cache of messages
     */
    public function getInbox( $user )
    {
        $sql = "SELECT IF(m.read=0,'unread','read') as read_style,
            m.subject, m.ID, m.sender, m.recipient,
            DATE_FORMAT(m.sent, '%D %M %Y') as sent_friendly,
            psender.name as sender_name
            FROM messages m, profile psender
            WHERE psender.user_id=m.sender
            AND m.recipient=" . $user . "
            ORDER BY m.ID DESC";
        $cache = $this->registry->getObject('db')->cacheQuery( $sql );
        return $cache;
    }
}
?>
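To see how the two classes fit together, here is a short sketch of controller code that sends a message and then prepares one for display. The sendMessage and viewMessage function names are invented for illustration and are not part of the book's framework; the sketch only assumes the Message methods defined above.

```php
<?php
// Hypothetical helper functions; only the Message API shown above is assumed.
function sendMessage( Registry $registry, $sender, $recipient, $subject, $text )
{
    $message = new Message( $registry );  // no ID passed: a new, unsaved message
    $message->setSender( $sender );
    $message->setRecipient( $recipient );
    $message->setSubject( $subject );
    $message->setMessage( $text );
    $message->setRead( 0 );               // new messages start out unread
    $message->save();                     // inserts a row and stores the new ID
}

function viewMessage( Registry $registry, $id, $currentUser )
{
    $message = new Message( $registry, $id );  // ID passed: loads the row
    // Only the recipient should be able to read the message
    if( $message->getRecipient() == $currentUser )
    {
        $message->setRead( 1 );
        $message->save();                 // updates the row: marked as read
        $message->toTags( 'message_' );   // exposes {message_subject}, {message_message}, etc.
    }
}
?>
```

The read check here mirrors the reason the getRecipient getter exists: the template layer only ever sees the tags populated by toTags.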
Nesting, Extend, Placeholders, and Mixins

Packt
05 Jun 2013
9 min read
Styling a site with Sass and Compass

Personally, while abstract examples are fine, I find it difficult to fully understand concepts until I put them into practice. With that in mind, at this point I'm going to amend the markup in our project's index.html file, so we have some markup to flex our growing Sass and Compass muscles on. We're going to build up a basic home page for this book. Hopefully, the basic layout and structure will be common enough to be of help. If you want to see how things turn out, you can point your browser at http://sassandcompass.com, as that's what we'll be building.

The markup for the home page is fairly simple. It consists of a header with links, navigation made up of list items, images, and content and a footer area. Pasting the home page markup here would span a few pages (and be extremely dull to read), so don't get too hung up on the specifics of the markup. Let me be clear: the actual code, selectors used, and the finished webpage that we'll create are not important. The Sass and Compass techniques and tools we use to create them are. You can download the code from the book's page at http://packtpub.com and also from http://sassandcompass.com

At this point, here's how the page looks in the browser:

Wow, what a looker! Thankfully, with Sass and Compass we're going to knock this into shape in no time at all. Notice that in the source code there are semantically named classes in the markup. Some people dislike this practice, but I have found that it makes it easier to create more modular and maintainable code. A full discussion on the reasoning behind using classes rather than styling elements themselves is a little beyond the scope of this book. However, a good book on this topic is SMACSS by Jonathan Snook (http://smacss.com).
It's not essential to adhere slavishly to the conventions he describes, but it's a great start in thinking about how to structure and write style sheets in general. First, let's open the _normalize.scss partial file and amend the default styles for links (changing the default text-underline to a dotted bottom border) and remove the padding and margin for the ul tags. Now that I think about it, before we get knee-deep in code, a little more organization is called for.

Separating the layout from visuals

Before getting into nesting, @extend, placeholders, and mixins, it makes sense to create some partial files for organizing the styles. Rather than create a partial Sass file for each structural area (the header, footer, and navigation), and lump all the relevant visual styles inside, we can structure the code in a slightly more abstract manner. There is no right or wrong way to split up Sass files. However, it's worth looking at mature and respected projects such as Twitter Bootstrap (https://github.com/twitter/bootstrap) and Foundation from Zurb (https://github.com/zurb/foundation) to see how they organize their code.

We'll create a partial file called _base.scss that will contain some base styles: code

Then a partial file called _layout.scss that will only contain rules pertaining to visual layout and positioning of the main page areas (the previously mentioned header, footer, aside, section, and navigation). Here are the initial styles being added into the _layout.scss partial file: code

There is nothing particular to Sass there; it's all standard CSS.

Debug help in the browser

Thanks to the popularity of Sass there are now experimental features in browsers to make debugging Sass even easier. When inspecting an element with developer tools, the source file and line number are provided, making it easier to find the offending selector.
For Chrome, here's a step-by-step explanation: http://benfra.in/1z1. Alternatively, if using Firefox, check out the FireSass extension: https://addons.mozilla.org/en-us/firefox/addon/firesass-for-firebug/

Let's create another partial file called _modules.scss. This will contain all the modular pieces of code. This means, should we need to move the .testimonial section (which should be a modular section) in the source code, it's not necessary to move it from one partial file to another (which would be the case if the partial files were named according to their existing layout). The _modules.scss file will probably end up being the largest file in the Sass project, as it will contain all the aesthetics for the items on the page. However, the hope is that these modular pieces of code will be flexible enough to look good, regardless of the structural constraints defined in the _layout.scss partial file. If working on a very large project, it may be worth splitting the modules into sub-folders. Just create a sub-folder and import the file. For example, if there was a partial called _callouts.scss in a sub-folder called modules, it could be imported using the following code: code

Here is the contents of the amended styles.scss file: code

A quick look at the HTML in the browser shows the basic layout styles applied:

What nesting is and how it facilitates modules of code

Nesting provides the ability to nest styles within one another. This provides a handy way to write mini blocks of modular code.

Nesting syntax

With a normal CSS style declaration, there is a selector, then an opening curly brace, and then any number of property and value pairs, followed by a closing curly brace. For example: code

Where Sass differs is that before the closing curly brace, you can nest another rule within. What do I mean by that?
Consider this example: code

In the previous code, a color has been set for the link tag using a variable. Then the hover and focus states have been nested within the main anchor style with a different color set. Furthermore, styles for the visited and active states have been nested with an alternate color defined. To reference a variable, simply write the dollar sign ($) and then the name of the variable, for example, $variable. When compiled, this produces the following CSS: code

Notice that Sass is also smart enough to know that the hex value #7FFF00 is actually the color called chartreuse, and when it has generated the CSS it has converted it to the CSS color name. How can this be used practically? Let's look at the markup for the list of navigation links on the home page. Each currently looks like this: code

I've omitted the additional list items and closing tags in the previous code for the sake of brevity. From a structure point of view, each list item contains an anchor link with a <b> tag and a <span> tag within. Here are the initial Sass nested styles that have been added into the _modules.scss partial to control that section: code

Notice that the rules are being nested inside the class .chapter-summary. That's because it limits the scope of the nested styles so that they only apply to elements within a list item with a class of .chapter-summary. However, remember that if the styles can be re-used elsewhere, it isn't necessarily the best practice to nest them, as they may become too specific. The nesting of the selectors in this manner partially mirrors the structure of the HTML, so it's easy to see that the styles nested within only apply to those that are similarly structured in the markup. We're starting with the outermost element that needs to be styled (the <li class="chapter-summary">) and then nesting the first element within that. In this instance, it's the anchor tag.
Then further styles have been nested for the hover and visited states along with the <b> and <span> tags. I tend to add focus and active states at the end of a project, but that's a personal thing. If you're likely to forget them, by all means add them at the same time you add hover. Here's the generated CSS from that block: code

The nested styles are generated with a degree of specificity that limits their scope; they will only apply to elements within a <li class="chapter-summary">. It's possible to nest all manner of styles within one another. Classes, IDs, pseudo classes; Sass doesn't much care. For example, let's suppose the <li> elements need to have a dotted border, except for the last one. Let's take advantage of the last-child CSS pseudo class in combination with nesting and add this section: code

The parent selector

Notice that any pseudo selector that needs to be nested in Sass is prefixed with the ampersand symbol (&), then a colon (:). The ampersand symbol has a special meaning when nesting. It always references the parent selector. It's perfect for defining pseudo classes as it effectively says 'the parent plus this pseudo element'. However, there are other ways it can work too, for example, nesting a few related selectors and expressing different relationships between them: code

This actually results in the following CSS: code

We have effectively reversed the selectors while nesting by using the & parent selector.

Chaining selectors

It's also possible to create chained selectors using the parent selector. Consider this: code

That will compile to this: code

That will only select an element that has both HTML classes, namely selector-one and selector-two. The parent selector is perfect for pseudo selectors, and at odd times it's necessary to chain selectors, but beyond that I couldn't find much practical use for it, until I saw this set of slides by Sass guru Brandon Mathis: http://speakerdeck.com/u/imathis/p/sass-compass-the-future-of-stylesheets-now.
In this, he illustrates how to use the parent selector for nesting a Modernizr relevant rule.
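Since the code listings were stripped from this extract (the "code" placeholders above), here is a small illustrative SCSS sketch of the parent-selector behaviors just described. The class names are invented for illustration; only the & mechanics matter.

```scss
a {
  color: $link-color;                 // assumes a $link-color variable defined elsewhere

  &:hover,
  &:focus { color: chartreuse; }      // & stands in for the parent: compiles to a:hover, a:focus

  .no-js & { text-decoration: underline; }  // reversed nesting: compiles to .no-js a

  &.call-to-action { font-weight: bold; }   // chaining: compiles to a.call-to-action
}
```

The reversed form is the same trick Mathis uses for Modernizr: the rule only fires when an ancestor carries the feature-detection class.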
CMS Made Simple: Application of User-Defined Tags

Packt
10 May 2011
9 min read
CMS Made Simple Development Cookbook Over 70 simple but incredibly effective recipes for extending CMS Made Simple with detailed explanations – useful for beginners and experts alike! These recipes have been specially selected to demonstrate capabilities. While they may be useful as provided, they are designed to show the range of what can be done with tags, and to provide examples of useful operations such as setting Smarty variables, interfacing with the database, interacting with modules, and consuming data from remote web services. You are encouraged to use these recipes as starting points for your own projects. Introduction One of the most popular uses of CMS Made Simple involves an administrator who creates a basic site structure and template, and one or more editors who maintain page content without having to worry about the site layout. In this configuration, you can think of the CMS essentially as a big template engine with some utilities to add content. Each page of the site comprises a page template with areas that get populated with various kinds of content. These areas are either tags or variables, where the distinction is that a tag involves the execution of code, while a variable is a simple substitution of values. The CMS comes with a large assortment of built-in tags, including the fundamental "content" tag and a variety of useful supporting tags for building navigation, links, including images, and so forth. Because CMS Made Simple can be seen as being built around tags, it is not surprising that there are facilities for creating your own tags. You can create tags directly through the Admin panel—these are called User-Defined Tags or UDTs. If you want even more extensive capabilities in your tag, you can have those capabilities by creating a file that follows a few basic conventions. The template engine used by CMS Made Simple comes from the Smarty project. Pages are Smarty templates, and the built-in CMS tag types are all Smarty tags. 
Behind the scenes, both UDTs and file-based tags also become Smarty tags. For additional information on Smarty, the documentation provided by the Smarty project is invaluable. You can read more at http://www.smarty.net/docsv2/en/

Displaying the User's IP address from a User-Defined Tag

This recipe demonstrates creating a very simple User-Defined Tag, and using it to display information that it gets from PHP globally-defined variables.

Getting ready

For this recipe, you will need to have CMS Made Simple installed and working. You will need login access to the site's Administration area as a member of the "admin" group, or as a member of a group with permissions to "Modify User-Defined Tags" and "Manage All Content".

How to do it...

Log in to your CMS Administration area. Using the top menu, go to "Extensions" and click on "User Defined Tags". Click on the "Add User Defined Tag" button. In the "Name" field type "user_ip". In the "Code" text area, type the following code:

echo 'Welcome user from '.$_SERVER['REMOTE_ADDR'].' -- enjoy your visit!';

Click on the "Submit" button. Using the top menu, go to "Content" and click on "Pages". Click on the "Add New Content" button. In the "Title" and "Menu Text" fields, type "Tag Test". In the "Content" text area, type the following code:

{user_ip}

Click on the "Submit" button. View your site from the user side. Click on the new "Tag Test" page.

How it works...

User-Defined Tags work by linking a piece of PHP code to a tag which is recognized by Smarty. When Smarty parses a template and encounters that tag, it executes the code in question, and substitutes the tag markup with any output from that PHP code. When we create a UDT in the CMS Made Simple admin area, the name we enter into the text field becomes the tag which Smarty will look for, and the code we enter into the text area is the PHP associated with the tag. This recipe is an example of outputting text directly from a User-Defined Tag.
It works by echoing a string that then replaces the {user_ip} tag in the content. This substitution of the tag can take place either in a page template or in the page content, which might seem a little confusing since it's stated earlier that Smarty processes tags that it finds in templates. The confusion, though, is mostly due to terminology; CMS Made Simple processes both layout templates and page content through Smarty, so while it is being processed, page content serves as a template from Smarty's perspective. The User-Defined Tag we create accesses one of the PHP "superglobals", the $_SERVER variable. This variable contains information about the HTTP request and some environmental information about the PHP server. In this case, we're getting the user's IP address information to display.

Using the CmsObject and the current content object in a User-Defined Tag

This recipe shows you how to access the Page Content object via the CmsObject in a User-Defined Tag. The primary technique that it demonstrates, getting a reference to the CmsObject and using it to get references to other important CMS runtime objects, is one that you will use to solve many different kinds of problems. In this recipe, we use the CmsObject to get a reference to the Page Content object. Once we have a reference to this object, we can call upon its methods to report all kinds of interesting information about the current page content.

Getting ready

For this recipe, you will need login access to your site's Administration area with permission to "Modify User-Defined Tags" and "Manage All Content".

How to do it...

Log in to your CMS Administration area. Using the top menu, go to "Extensions" and click on "User Defined Tags". Click on the "Add User Defined Tag" icon. In the "Name" field, type "content_description".
In the "Code" field, type the following code:

$gCms = cmsms();
$contentops = $gCms->GetContentOperations();
$content_obj = $contentops->getContentObject();
echo 'Current content object is "'.$content_obj->Name().'"<br />';
echo 'which was created on '.$content_obj->GetCreationDate().'<br />';
echo 'its alias is "'.$content_obj->Alias().'"<br />';
echo 'and its URL is "'.$content_obj->GetURL().'"<br />';

Click on the "Submit" button. Using the top menu, select "Content" and click on "Pages". Click on the "Add New Content" button. Enter "New Content Page" into the "Title" and "Menu Text" fields. In the "Content" field, enter:

{content_description}

Click on the "Submit" button. View your site from the user side. Click on the new "Content Page" page.

How it works...

CMS Made Simple uses an internal object called "CmsObject" to keep track of a great deal of what it needs to serve content. This information includes the site hierarchy, the installed and active modules, references to the database, and more. The CmsObject is available to Modules, Tags, and User-Defined Tags via the cmsms() function. In this recipe, we want to display information from the current Content object. However, this object is not directly accessible. To get access to this object, we need to make a static call to the ContentOperations object, but this object is also not directly accessible from our code. To get access to these objects, we need to get references to them. We start by gaining access to the CmsObject by calling the global cmsms() function. From our reference to the CmsObject, we request a reference to the ContentOperations object, which we then use to request a reference to the current content object. Once we have the current content object, we can call assorted methods on it to retrieve information about the current content. In this recipe, we get the content's name, when it was created, its alias, and its URL.
Since this recipe is an example, we don't format these various pieces of information or use a template, but simply echo them. In a production UDT, or one in which there's any complexity to what is being displayed, it would be more advisable to use the values to populate Smarty variables for later display, or use a template for formatting them.

There's more...

There are a number of interesting objects that the CmsObject manages. References can be requested for:

BookmarkOperations—used in the Admin area
ContentOperations—which has methods to handle Content and content metadata
Db—which wraps the ADODB database API
GlobalContentOperations—which has methods to handle Global Content Blocks
GroupOperations—which has methods to handle Admin Groups
HierarchyManager—which has methods for handling site content hierarchies
ModuleOperations—which has methods for managing Modules
Smarty—which wraps the Smarty template engine
StylesheetOperations—which has methods for handling Stylesheets
TemplateOperations—which has methods for handling Smarty templates
UserOperations—which has methods for handling Admin Users
UserTagOperations—which has methods for handling User-Defined Tags

Getting attributes using page_attr

If you're looking to display information about the current content object, CMS Made Simple has a custom Smarty tag that provides some attributes: {page_attr}. You can use this tag to display page attribute information, simply by specifying a key to the information you're requesting. For example, to get the page's "image" attribute, you could use the tag {page_attr key="image"}. Other keys that are supported include:

target
image
thumbnail
extra1
extra2
extra3
searchable
pagedata
disable_wysiwyg
content_en

As you can see, this is not as rich a collection of information as you can get from the content object directly using your own UDT.
Old code and the use of globals

If you look at old modules or User-Defined Tags that have been posted in the CMS Made Simple forum, you may see the following code:

global $gCms;

instead of the $gCms = cmsms() used here. For a long time, the use of the global declaration was the preferred way to access the CmsObject. This approach is now deprecated for a number of reasons, the most important of which is a security issue; the change also aids portability of code between CMS Made Simple 1.x and the upcoming 2.x versions. While use of the global declaration to access the CmsObject will likely still work, it is strongly discouraged.
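Following the earlier advice about populating Smarty variables rather than echoing, here is a hedged sketch of the content_description UDT rewritten in that style. The template tag names page_name and page_url are invented for illustration, and the GetSmarty() accessor is an assumption to verify against your CMS Made Simple version.

```php
// Sketch: assign values to Smarty instead of echoing them from the UDT.
$gCms = cmsms();
$contentops = $gCms->GetContentOperations();
$content_obj = $contentops->getContentObject();
$smarty = $gCms->GetSmarty();   // assumption: accessor name may differ between versions
$smarty->assign( 'page_name', $content_obj->Name() );
$smarty->assign( 'page_url', $content_obj->GetURL() );
```

A page template can then display {$page_name} and {$page_url} wherever they are needed, keeping formatting decisions out of the UDT itself.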
Templates for Web Pages

Packt
13 Oct 2015
13 min read
In this article, by Kai Nacke, author of the book D Web Development, we will learn that every website has some recurring elements, often called a theme. Templates are an easy way to define these elements only once and then reuse them. A template engine is included in vibe.d with the so-called Diet templates. The template syntax is based on the Jade templates (http://jade-lang.com/), which you might already know about. In this article, you will learn the following:

Why templates are useful
Key concepts of Diet templates: inheritance, include, and blocks
How to use filters and how to create your own filter

Using templates

Let's take a look at a simple HTML5 page with a header, footer, navigation bar, and some content:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Demo site</title>
    <link rel="stylesheet" type="text/css" href="demo.css" />
  </head>
  <body>
    <header>
      Header
    </header>
    <nav>
      <ul>
        <li><a href="link1">Link 1</a></li>
        <li><a href="link2">Link 2</a></li>
        <li><a href="link3">Link 3</a></li>
      </ul>
    </nav>
    <article>
      <h1>Title</h1>
      <p>Some content here.</p>
    </article>
    <footer>
      Footer
    </footer>
  </body>
</html>

The formatting is done with a CSS file, as shown in the following:

body {
    font-size: 1em;
    color: black;
    background-color: white;
    font-family: Arial;
}
header {
    display: block;
    font-size: 200%;
    font-weight: bolder;
    text-align: center;
}
footer {
    clear: both;
    display: block;
    text-align: center;
}
nav {
    display: block;
    float: left;
    width: 25%;
}
article {
    display: block;
    float: left;
}

Despite being simple, this page has elements that you often find on websites. If you create a website with more than one page, then you will use this structure on every page in order to provide a consistent user interface. From the second page on, you will violate the Don't Repeat Yourself (DRY) principle: the header and footer are the elements with fixed content.
The content of the navigation bar is also fixed, but not every item is always displayed. Only the real content of the page (in the article block) changes with every page. Templates solve this problem. A common approach is to define a base template with the structure. For each page, you will define a template that inherits from the base template and adds the new content.

Creating your first template

In the following sections, you will create a Diet template from the HTML page using different techniques.

Turning the HTML page into a Diet template

Let's start with a one-to-one translation of the HTML page into a Diet template. The syntax is based on the Jade templates. It looks similar to the following:

doctype html
html
    head
        meta(charset='utf-8')
        title Demo site
        link(rel='stylesheet', type='text/css', href='demo.css')
    body
        header
            | Header
        nav
            ul
                li
                    a(href='link1') Link 1
                li
                    a(href='link2') Link 2
                li
                    a(href='link3') Link 3
        article
            h1 Title
            p Some content here.
        footer
            | Footer

The template resembles the HTML page. Here are the basic syntax rules for a template:

The first word on a line is an HTML tag
Attributes of an HTML tag are written as a comma-separated list surrounded by parentheses
A tag may be followed by plain text that may contain HTML code
Plain text on a new line starts with the pipe symbol
Nesting of elements is done by increasing the indentation

If you want to see the result of this template, save the code as index.dt and put it together with the demo.css CSS file in the views folder. The Jade templates have a special syntax for nested elements. The list item/anchor pair from the preceding code could be written in one line, as follows:

li: a(href='link1') Link1

This syntax is currently not supported by vibe.d.
Now, you need to create a small application to see the result of the template by following the given steps:

1. Create a new project named template with dub, using the following command:
   $ dub init template vibe.d
2. Save the template as the views/index.dt file.
3. Copy the demo.css CSS file into the public folder.
4. Change the generated source/app.d application to the following:

   import vibe.d;

   shared static this()
   {
       auto router = new URLRouter;
       router.get("/", staticTemplate!"index.dt");
       router.get("*", serveStaticFiles("public/"));

       auto settings = new HTTPServerSettings;
       settings.port = 8080;
       settings.bindAddresses = ["::1", "127.0.0.1"];
       listenHTTP(settings, router);

       logInfo("Please open http://127.0.0.1:8080/ in your browser.");
   }

5. Run dub inside the project folder to start the application and then browse to http://127.0.0.1:8080/ to see the resulting page.

The application uses a new URLRouter class. This class is used to map a URL to a web page. With the router.get("/", staticTemplate!"index.dt"); statement, every request for the base URL is answered by rendering the index.dt template. The router.get("*", serveStaticFiles("public/")); statement uses a wildcard to serve all other requests as static files stored in the public folder.

Adding inheritance

Up to now, the template is only a one-to-one translation of the HTML page. The next step is to split the file into two files, layout.dt and index.dt. The layout.dt file defines the general structure of a page while index.dt inherits from this file and adds new content. The key to template inheritance is the definition of a block. A block has a name and contains some template code. A child template may replace a block, or append or prepend content to a block. In the following layout.dt file, four blocks are defined: header, navigation, content, and footer.
For all the blocks except content, a default text is defined, as follows:

doctype html
html
    head
        meta(charset='utf-8')
        title Demo site
        link(rel='stylesheet', type='text/css', href='demo.css')
    body
        block header
            header Header
        block navigation
            nav
                ul
                    li <a href="link1">Link 1</a>
                    li <a href="link2">Link 2</a>
                    li <a href="link3">Link 3</a>
        block content
        block footer
            footer Footer

The template in the index.dt file inherits this layout and replaces the block content, as shown here:

extends layout

block content
    article
        h1 Title
        p Some content here.

You can put both files into the views folder and run dub again. The rendered page in your browser still looks the same. You can now add more pages and reuse the layout. It is also possible to change the common elements that you defined in the header, footer, and navigation blocks. There is no restriction on the level of inheritance. This allows you to construct very sophisticated template systems.

Using include

Inheritance is not the only way to avoid repetition of template code. With the include keyword, you insert the content of another file. This allows you to put reusable template code in separate files. As an example, just put the following navigation in a separate navigation.dt file:

nav
    ul
        li <a href="link1">Link 1</a>
        li <a href="link2">Link 2</a>
        li <a href="link3">Link 3</a>

The index.dt file uses the include keyword to insert the navigation.dt file, as follows:

doctype html
html
    head
        meta(charset='utf-8')
        title Demo site
        link(rel='stylesheet', type='text/css', href='demo.css')
    body
        header Header
        include navigation
        article
            h1 Title
            p Some content here.
        footer Footer

Just as with the inheritance example, you can put both files into the views folder and run dub again. The rendered page in your browser still looks the same. The Jade templates allow you to apply a filter to the included content.
This is not yet implemented in vibe.d.

Integrating other languages with blocks and filters

So far, the templates have only used HTML content. However, a web application usually builds on a bunch of languages, most often integrated in a single document, as follows:

CSS styles inside the style element
JavaScript code inside the script element
Content in a simplified markup language such as Markdown

Diet templates have two mechanisms for the integration of other languages. If a tag is followed by a dot, then the block is treated as plain text. For example, the following template code:

p.
    Some text
    And some more text

It translates into the following:

<p>
    Some text
    And some more text
</p>

The same can also be used for scripts and styles. For example, you can use the following script tag with JavaScript code in it:

script(type='text/javascript').
    console.log('D is awesome')

It translates to the following:

<script type="text/javascript">
    console.log('D is awesome')
</script>

An alternative is to use a filter. You specify a filter with a colon followed by the filter name. The script example can be written with a filter, as shown in the following:

:javascript
    console.log('D is awesome')

This is translated to the following:

<script type="text/javascript">
    //<![CDATA[
    console.log('D is awesome')
    //]]>
</script>

The following filters are provided by vibe.d:

javascript for JavaScript code
css for CSS styles
markdown for content written in Markdown syntax
htmlescape to escape HTML symbols

The css filter works in the same way as the javascript filter. The markdown filter accepts text written in the Markdown syntax and translates it into HTML. Markdown is a simplified markup language for web authors. The syntax is available on the Internet at http://daringfireball.net/projects/markdown/syntax.
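To see what a filter such as markdown actually does to its input, here is a deliberately tiny Python sketch of the transformation. It is not vibe.d's implementation (which uses a full Markdown parser); it only handles inline links and dash list items, which is enough to illustrate the idea:

```python
import re

def mini_markdown(text: str) -> str:
    """Toy Markdown subset: '[text](url)' links and '- item' list
    items. Purely illustrative of what a markdown filter does."""
    html_lines = []
    in_list = False
    for line in text.splitlines():
        # convert [text](url) into an anchor element
        line = re.sub(r"\[([^\]]+)\]\(([^)]+)\)",
                      r'<a href="\2">\1</a>', line)
        if line.lstrip().startswith("- "):
            if not in_list:          # open the list on the first item
                html_lines.append("<ul>")
                in_list = True
            html_lines.append("<li>%s</li>" % line.lstrip()[2:])
        else:
            if in_list:              # close the list when items end
                html_lines.append("</ul>")
                in_list = False
            html_lines.append(line)
    if in_list:
        html_lines.append("</ul>")
    return "\n".join(html_lines)
```

Feeding it the navigation items from the next example, such as "- [Link 1](link1)", produces the same kind of <ul>/<li>/<a> structure that the hand-written Diet navigation generates.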
Here is our template, this time using the markdown filter for the navigation and the article content:

doctype html
html
    head
        meta(charset='utf-8')
        title Demo site
        link(rel='stylesheet', type='text/css', href='demo.css')
    body
        header Header
        nav
            :markdown
                - [Link 1](link1)
                - [Link 2](link2)
                - [Link 3](link3)
        article
            :markdown
                Title
                =====
                Some content here.
        footer Footer

The rendered HTML page is still the same. The advantage is that you have less to type, which is good if you produce a lot of content. The disadvantage is that you have to remember yet another syntax.

A normal plain text block can contain HTML tags, as follows:

p. Click this <a href="link">link</a>

This is rendered as the following:

<p>
    Click this <a href="link">link</a>
</p>

There are situations where you want to treat even the HTML tags as plain text, for example, if you want to explain HTML syntax. In this case, you use the htmlescape filter, as follows:

p
    :htmlescape
        Link syntax: <a href="url target">text to display</a>

This is rendered as the following:

<p>
    Link syntax: &lt;a href="url target"&gt;text to display&lt;/a&gt;
</p>

You can also add your own filters. The registerDietTextFilter() function is provided by vibe.d to register new filters. This function takes the name of the filter and a pointer to the filter function. The filter function is called with the text to filter and the indentation level. It returns the filtered text. For example, you can use this functionality for pretty printing of D code, as follows:

Create a new project with dub, using the following command:
$ dub init filter vibe.d
Create the index.dt template file in the views folder.
Use the new dcode filter to format the D code, as shown in the following:

doctype html
head
    title Code filter example
    :css
        .keyword { color: #0000ff; font-weight: bold; }
body
    p You can create your own functions.
    :dcode
        T max(T)(T a, T b)
        {
            if (a > b) return a;
            return b;
        }

Implement the filter function in the app.d file in the source folder. The filter function outputs the text inside a <pre> tag. Identified keywords are put inside a <span class="keyword"> element to allow custom formatting. The whole application is as follows:

import vibe.d;

string filterDCode(string text, size_t indent)
{
    import std.regex;
    import std.array;

    auto dst = appender!string;
    filterHTMLEscape(dst, text, HTMLEscapeFlags.escapeQuotes);
    auto regex = regex(r"(^|\s)(if|return)(;|\s)");
    text = replaceAll(dst.data, regex,
                      "$1<span class=\"keyword\">$2</span>$3");
    auto lines = splitLines(text);
    string indent_string = "\n";
    while (indent-- > 0)
        indent_string ~= "\t";
    string ret = indent_string ~ "<pre>";
    foreach (ln; lines)
        ret ~= indent_string ~ ln;
    ret ~= indent_string ~ "</pre>";
    return ret;
}

shared static this()
{
    registerDietTextFilter("dcode", &filterDCode);

    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    listenHTTP(settings, staticTemplate!"index.dt");
    logInfo("Please open http://127.0.0.1:8080/ in your browser.");
}

Compile and run this application to see that the keywords are bold and blue.

Summary

In this article, we have seen how to create a Diet template using different techniques such as translating the HTML page into a Diet template, adding inheritance, using include, and integrating other languages with blocks and filters.

Resources for Article:

Further resources on this subject:

MODx Web Development: Creating Lists [Article]
MODx 2.0: Web Development Basics [Article]
Ruby with MongoDB for Web Development [Article]
Packt
15 Apr 2013
7 min read

Liferay, its Installation and setup

(For more resources related to this topic, see here.) Overview about portals Well, to understand more about what portals are, let me throw some familiar words at you. Have you used, heard, or seen iGoogle, the Yahoo! home page, or MSN? If the answer is yes, then you have been using portals already. All these websites have two things in common. A common dashboard Information from various sources shown on a single page, giving a uniform experience For example, on iGoogle, you can have a gadget showing the weather in Chicago, another gadget to play your favorite game of Sudoku, and a third one to read news from around the globe, everything on the same page without you knowing that all of these are served from different websites! That is what a portal is all about. So, a portal (or web portal) can be thought of as a website that shows, presents, displays, or brings together information or data from various sources and gives the user a uniform browsing experience. The small chunks of information that form the web page are given different names such as gadgets or widgets, portlets or dashlets. Introduction to Liferay Now that you have some basic idea about what portals are, let us revisit the initial statement I made about Liferay. Liferay is an open source portal solution. If you want to create a portal, you can use Liferay to do this. It is written in Java. It is an open source solution, which means the source code is freely available to everyone and people can modify and distribute it. With Liferay you can create basic intranet sites with minimal tweaking. You can also go for a full-fledged enterprise banking portal website with programming, and heavy customizations and integrations. 
Besides the powerful portal capabilities, Liferay also provides the following: Awesome enterprise and web content management capabilities Robust document management which supports protocols such as CMIS and WebDAV Good social collaboration features Liferay is backed up by a solid and active community, whose members are ever eager to help. Sounds good? So what are we waiting for? Let's take a look at Liferay and its features. Installation and setup In four easy steps, you can install Liferay and run it on your system. Step 1 – Prerequisites Before we go and start our Liferay download, we need to check if we have the requirements for the installation. They are as follows: Memory: 2 GB (minimum), 4 GB (recommended). Disk space: Around 5 GB of free space should be more than enough for the exercises mentioned in the book. The exercises performed in this book are done on Windows XP. So you can use the same or any subsequent versions of Windows OS. Although Liferay can be run on Mac OSX and Linux, it is beyond the scope of this book how to set up Liferay on them. The MySQL database should be installed. As with the OS, Liferay can be run on most of the major databases out there in the market. Liferay is shipped with the Hypersonic database by default for demo purpose, which should not be used for a production environment. Unzip tools such as gzip or 7-Zip. Step 2 – Downloading Liferay You can download the latest stable version of Liferay from https://www.liferay.com/downloads/liferay-portal/available-releases. Liferay comes in the following two versions: Enterprise Edition: This version is not free and you would have to purchase it. This version has undergone rigorous testing cycles to make sure that all the features are bug free, providing the necessary support and patches. Community Edition: This is a free downloadable version that has all the features but no enterprise support provided. 
Liferay is supported by a lot of open source application servers and the folks at Liferay have made it easy for end users by packaging everything as a bundle. What this means is that if you are asked to have Liferay installed in a JBoss application server, you can just go to the URL previously mentioned and select the Liferay-JBoss bundle to download, which gives you the JBoss application server with Liferay preinstalled. We will download the Community Edition of the Liferay-Tomcat bundle, which has Liferay preinstalled in the Tomcat server. The stable version at the time of writing this book was Liferay 6.1 GA2. As shown in the following screenshot, just click on Download after making sure that you have selected Liferay bundled with Tomcat and save the ZIP file at an appropriate location: Step 3 – Starting the server After you have downloaded the bundle, extract it to the location of your choice on your machine. You can see a folder named liferay-portal-6.1.1-ce-ga2. The latter part of the name can change based on the version that you download. Let us take a moment to have a look at the folder structure as shown in the following screenshot: The liferay-portal-6.1.1-ce-ga2 folder is what we will refer to as LIFERAY_HOME. This folder contains the server, which in our case is tomcat-7.0.27. Let's refer to this folder as SERVER_HOME. Liferay is created using Java, so to run Liferay we need the Java Runtime Environment (JRE). The Liferay bundle is shipped with a JRE by default (as you can see inside our SERVER_HOME). So if you are running a Windows OS, you can directly start and run Liferay. If you are using any other OS, you need to set the JAVA_HOME environment variable. Navigate to SERVER_HOME/webapps. This is where all the web applications are deployed. Delete everything in this folder except marketplace-portlet and ROOT. Now go to SERVER_HOME/bin and double-click on startup.bat, since we are using Windows OS. This will bring up a console showing the server startup.
Wait till you see the Server Startup message in the console, after which you can access Liferay from the browser. Step 4 – Doing necessary first-time configurations Once the server is up, open your favorite browser and type in http://localhost:8080. You will be shown a screen that performs basic configurations, such as changing the database and name of your portal, deciding what should be the admin name and e-mail address, or changing the default locale. This is a new feature introduced in Liferay 6.1 to ease the first-time setup, which in previous versions had to be done using a property file. Go change the name of the portal, administrator username, and e-mail address. Keep the locale as it is. As I stated earlier, Liferay is shipped with a default Hypersonic database which is normally used for demo purposes. You can change it to MySQL if you want, by selecting the database type from the drop-down list presented, and typing in the necessary JDBC details. I have created a database in MySQL by the name Portal Starter; hence my JDBC URL would contain that. You can create a blank database in MySQL and accordingly change the JDBC URL. Once you are done making your changes, click on the Finish Configuration button as shown in the following screenshot: This will open up a screen, which will show the path where this configuration is saved. What Liferay does behind the scenes is create a property file named portal-setup-wizard.properties and put all the configurations in it. This, as I said earlier, had to be created manually in previous versions of Liferay. Clicking on the Go to my portal button on this screen will take the user to the Terms of Use page. Agree to the terms and proceed further. A screen will be shown to change the password for your admin user that you earlier specified in the Basic Configuration screen. After you change the password, you will be presented with a screen to select a password reminder question.
Select a question or create your own question from the drop-down list, set the password reminder, and move on. And that's it!! Finally, you can see the home page of Liferay. That's it and you are done setting up your very first Liferay instance. Summary So, we just gained a quick understanding about portals and Liferay and its installation and setup that teaches you how to set up Liferay on your local machine. Resources for Article : Further resources on this subject: Vaadin Portlets in Liferay User Interface Development [Article] Setting up and Configuring a Liferay Portal [Article] User Interface in Production [Article]
Packt
20 Sep 2013
19 min read

Oracle GoldenGate- Advanced Administration Tasks - I

(For more resources related to this topic, see here.) Upgrading Oracle GoldenGate binaries In this recipe you will learn how to upgrade GoldenGate binaries. You will also learn about GoldenGate patches and how to apply them. Getting ready For this recipe, we will upgrade the GoldenGate binaries from version 11.2.1.0.1 to 11.2.1.0.3 on the source system, that is prim1-ol6-112 in our case. Both of these binaries are available from the Oracle Edelivery website under the part number V32400-01 and V34339-01 respectively. 11.2.1.0.1 binaries are installed under /u01/app/ggate/112101. How to do it... The steps to upgrade the Oracle GoldenGate binaries are: Make a new directory for 11.2.1.0.3 binaries: mkdir /u01/app/ggate/112103 Copy the binaries ZIP file to the server in the new directory. Unzip the binaries file: [ggate@prim1-ol6-112 112103]$ cd /u01/app/ggate/112103 [ggate@prim1-ol6-112 112103]$ unzip V34339-01.zip Archive: V34339-01.zip inflating: fbo_ggs_Linux_x64_ora11g_64bit.tar inflating: Oracle_GoldenGate_11.2.1.0.3_README.doc inflating: Oracle GoldenGate_11.2.1.0.3_README.txt inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.3.pdf Install the new binaries in /u01/app/ggate/112103: [ggate@prim1-ol6-112 112103]$ tar -pxvf fbo_ggs_Linux_x64_ora11g_64bit.tar Stop the processes in the existing installation: [ggate@prim1-ol6-112 112103]$ cd /u01/app/ggate/112101 [ggate@prim1-ol6-112 112101]$ ./ggsci Oracle GoldenGate Command Interpreter for Oracle Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO Linux, x64, 64bit (optimized), Oracle 11g on Apr 23 2012 08:32:14 Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved. GGSCI (prim1-ol6-112.localdomain) 1> stop * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. 
Stop the manager process: GGSCI (prim1-ol6-112.localdomain) 2> STOP MGRManager process is required by other GGS processes.Are you sure you want to stop it (y/n)? ySending STOP request to MANAGER ...Request processed.Manager stopped. Copy the subdirectories to the new binaries: [ggate@prim1-ol6-112 112101]$ cp -R dirprm /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirrpt /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirchk /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R BR /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirpcs /u01/app/ggate/112103/ [ggate@prim1-ol6-112 112101]$ cp -R dirdef /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirout /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirdat /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirtmp /u01/app/ggate/112103/ Modify any parameter files under dirprm if you have hardcoded old binaries path in them. Edit the ggate user profile and update the value of the GoldenGate binaries home: vi .profile export GG_HOME=/u01/app/ggate/112103 Start the manager process from the new binaries: [ggate@prim1-ol6-112 ~]$ cd /u01/app/ggate/112103/ [ggate@prim1-ol6-112 112103]$ ./ggsci Oracle GoldenGate Command Interpreter for Oracle Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21 Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved. GGSCI (prim1-ol6-112.localdomain) 1> START MGR Manager started. Start the processes: GGSCI (prim1-ol6-112.localdomain) 18> START EXTRACT * Sending START request to MANAGER ... EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting How it works... The method to upgrade the GoldenGate binaries is quite straightforward. As seen in the preceding section, you need to download and install the binaries on the server in a new directory. 
After this, you would stop all the GoldenGate processes that are running from the existing binaries. Then you would copy all the important GoldenGate directories with parameter files, trail files, report files, checkpoint files, and recovery files to the new binaries. If your trail files are kept on a separate filesystem which is linked to the dirdat directory using a softlink, then you would just need to create a new softlink under the new GoldenGate binaries home. Once all the files are copied, you would need to modify the parameter files if you have the path of the existing binaries hardcoded in them. The same would also need to be done in the OS profile of the ggate user. After this, you just start the manager process and the rest of the processes from the new home. GoldenGate patches are all delivered as full binary sets. This makes the procedure to patch the binaries exactly the same as performing major release upgrades.

Table structure changes in GoldenGate environments with similar table definitions

Almost all application systems in IT undergo some change over a period of time. This change might include a fix for an identified bug, an enhancement, or some configuration change required due to a change in any other part of the system. The data that you replicate using GoldenGate will most likely be part of some application schema. These schemas, just like the application software, sometimes require changes which are driven by the application vendor. If you are replicating DDL along with DML in your environment, then these schema changes will most likely be replicated by GoldenGate itself. However, if you are replicating only DML and there are any DDL changes in the schema, particularly around the tables that you are replicating, then these will affect the replication and might even break it. In this recipe, you will learn how to update the GoldenGate configuration to accommodate the schema changes that are done to the source system.
This recipe assumes that the definitions of the tables that are replicated are similar in both the source and target databases. Getting ready For this recipe we are making the following assumptions: GoldenGate is set up to replicate only DML changes between the source and target environments. The application will be stopped for making schema changes in the source environment. The table structures in the source and target database are similar. The replication is configured for all objects owned by a SCOTT user using a SCOTT.* clause. The GoldenGate Admin user has been granted SELECT ANY TABLE in the source database and INSERT ANY TABLE, DELETE ANY TABLE, UPDATE ANY TABLE, SELECT ANY TABLE in the target database. The schema changes performed in this recipe are as follows: Add a new column called DOB (DATE) to the EMP table. Modify the DNAME column in the DEPT table to VARCHAR(20). Add a new table called ITEMS to the SCOTT schema: ITEMS ITEMNO NUMBER(5) PRIMARY KEY NAME VARCHAR(20) Add a new table called SALES to the SCOTT schema: SALES INVOICENO NUMBER(9) PRIMARY KEY ITEMNO NUMBER(5) FOREIGN KEY ITEMS(ITEMNO) EMPNO NUMBER(4) FOREIGN KEY EMP(EMPNO) Load the values for the DOB column in the EMP table. Load a few records in the ITEMS table. How to do it… Here are the steps that you can follow to implement the preceding schema changes in the source environment: Ensure that the application accessing the source database is stopped. There should not be any process modifying the data in the database. Once you have stopped the application, wait for 2 to 3 minutes so that all pending redo is processed by the GoldenGate extract. 
Check the latest timestamp read by the Extract and Datapump processes and ensure it is the current timestamp: GGSCI (prim1-ol6-112.localdomain) 9> INFO EXTRACT EGGTEST1 GGSCI (prim1-ol6-112.localdomain) 10> INFO EXTRACT * EXTRACT EGGTEST1 Last Started 2013-03-25 22:24 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:07 ago) Log Read Checkpoint Oracle Redo Logs 2013-03-25 22:35:06 Seqno 350, RBA 11778560 SCN 0.11806849 (11806849) EXTRACT PGGTEST1 Last Started 2013-03-25 22:24 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File /u01/app/ggate/dirdat/st000010 2013-03-25 22:35:05.000000 RBA 7631 Stop the Extract and Datapump processes in the source environment: GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. Check the status of the Replicat process in the target environment and ensure that it has processed the timestamp noted in step 3: GGSCI (stdby1-ol6-112.localdomain) 54> INFO REPLICAT * REPLICAT RGGTEST1 Last Started 2013-03-25 22:25 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File ./dirdat/rt000061 2013-03-25 22:37:04.950188 RBA 10039 Stop the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 48> STOP REPLICAT * Sending STOP request to REPLICAT RGGTEST1 ... Request processed. Apply the schema changes to the source database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. 
SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Apply the schema changes to the target database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Add supplemental logging for the newly added tables: GGSCI (prim1-ol6-112.localdomain) 4> DBLOGIN USERID GGATE_ADMIN@ DBORATEST Password: Successfully logged into database. GGSCI (prim1-ol6-112.localdomain) 5> ADD TRANDATA SCOTT.ITEMS Logging of supplemental redo data enabled for table SCOTT.ITEMS. GGSCI (prim1-ol6-112.localdomain) 6> ADD TRANDATA SCOTT.SALES Logging of supplemental redo data enabled for table SCOTT.SALES. Alter the Extract and Datapump processes to skip the changes generated by the Application Schema Patch: GGSCI (prim1-ol6-112.localdomain) 7> ALTER EXTRACT EGGTEST1 BEGIN NOW EXTRACT altered. GGSCI (prim1-ol6-112.localdomain) 8> ALTER EXTRACT PGGTEST1 BEGIN NOW EXTRACT altered. Start the Extract and Datapump in the source environment: GGSCI (prim1-ol6-112.localdomain) 9> START EXTRACT * Sending START request to MANAGER ... 
EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting Start the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 56> START REPLICAT RGGTEST1 Sending START request to MANAGER ... REPLICAT RGGTEST1 starting

How it works...

The preceding steps cover a high-level procedure that you can follow to modify the structure of the replicated tables in your GoldenGate configuration. Before you start to alter any processes or parameter files, you need to ensure that the applications are stopped and no user sessions in the database are modifying the data in the tables that you are replicating. Once the application is stopped, we check that all the redo data has been processed by the GoldenGate processes and then stop them. At this point we run the scripts that need to be run to make DDL changes to the database. This step needs to be run on both the source and target databases as we will not be replicating these changes using GoldenGate. Once this is done, we alter the GoldenGate processes to start from the current time and start them.

There's more...

Some of the assumptions made in the earlier procedure might not hold true for all environments. Let's see what needs to be done in such cases where the environment does not satisfy these conditions:

Specific tables defined in GoldenGate parameter files

Unlike the earlier example, where the tables are defined in the parameter files using a schema qualifier, for example SCOTT.*, if you have individual tables defined in the GoldenGate parameter files, you would need to modify the GoldenGate parameter files to add these newly created tables to include them in replication.
Individual table permissions granted to the GoldenGate Admin user

If you have granted table-specific permissions to the GoldenGate Admin user in the source and target environments, you would need to grant them on the newly added tables to allow the GoldenGate user to read their data in the source environment and also to apply the changes to these tables in the target environment.

Supplemental logging for modified tables without any keys

If you are adding or deleting any columns from tables in the source database which do not have any primary/unique keys, you would then need to drop the existing supplemental log group and re-add it. This is because when there are no primary/unique keys in a table, GoldenGate adds all columns to the supplemental log group. This supplemental log group will have to be modified when the structure of the underlying table is modified.

Supplemental log groups with all columns for modified tables

In some cases, you would need to enable supplemental logging on all columns of the source tables that you are replicating. This is mostly applicable for consolidation replication topologies where all changes are captured and converted into INSERTs in the target environment, which usually is a data warehouse. In such cases, you need to drop and re-add the supplemental logging on the tables in which you are adding or removing any columns.

Table structure changes in GoldenGate environments with different table definitions

In this recipe you will learn how to perform table structure changes in a replication environment where the table structures in the source and target environments are not similar.

Getting ready

For this recipe we are making the following assumptions: GoldenGate is set up to replicate only DML changes between the source and target environments. The application will be stopped for making schema changes in the source environment. The table structures in the source and target databases are not similar.
The GoldenGate Admin user has been granted SELECT ANY TABLE in the source database, and INSERT ANY TABLE, DELETE ANY TABLE, UPDATE ANY TABLE, and SELECT ANY TABLE in the target database.
The definition file was generated for the source schema and is configured in the replicat parameter file.

The schema changes performed in this recipe are as follows:

Add a new column called DOB (DATE) to the EMP table.
Modify the DNAME column in the DEPT table to VARCHAR(20).
Add a new table called ITEMS to the SCOTT schema:

ITEMS
  ITEMNO  NUMBER(5)  PRIMARY KEY
  NAME    VARCHAR(20)

Add a new table called SALES to the SCOTT schema:

SALES
  INVOICENO  NUMBER(9)  PRIMARY KEY
  ITEMNO     NUMBER(5)  FOREIGN KEY ITEMS(ITEMNO)
  EMPNO      NUMBER(4)  FOREIGN KEY EMP(EMPNO)

Load the values for the DOB column in the EMP table.
Load a few records into the ITEMS table.

How to do it...

Here are the steps that you can follow to implement the previous schema changes in the source environment:

Ensure that the application accessing the source database is stopped. There should not be any process modifying the data in the database.
Once you have stopped the application, wait for 2 to 3 minutes so that all pending redo is processed by the GoldenGate extract.
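The "has all pending redo been processed" check in the next step amounts to parsing GGSCI INFO output and confirming a zero checkpoint lag for every process before anything is stopped. The sketch below shows one way this check could be scripted; the parsing assumes the output format shown in the samples in this recipe, which can vary between GoldenGate versions, and the helper names are our own:

```python
import re

def checkpoint_lags(info_output):
    """Map each EXTRACT process in 'INFO EXTRACT *' output to its
    reported 'Checkpoint Lag' value (as an HH:MM:SS string)."""
    lags, current = {}, None
    for line in info_output.splitlines():
        name = re.match(r"EXTRACT\s+(\w+)", line.strip())
        if name:
            current = name.group(1)
        lag = re.search(r"Checkpoint Lag\s+(\d{2}:\d{2}:\d{2})", line)
        if lag and current:
            lags[current] = lag.group(1)
    return lags

def safe_to_stop(info_output):
    """True only when at least one process was found and every one of
    them reports a zero checkpoint lag."""
    lags = checkpoint_lags(info_output)
    return bool(lags) and all(v == "00:00:00" for v in lags.values())

sample = """\
EXTRACT EGGTEST1 Last Started 2013-03-28 10:12 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:07 ago)
EXTRACT PGGTEST1 Last Started 2013-03-28 10:12 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:04 ago)
"""
print(safe_to_stop(sample))  # → True
```

A wrapper would feed this from something like `echo "INFO EXTRACT *" | ./ggsci` and refuse to issue STOP commands until it returns True.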
Check the latest timestamp read by the Extract and Datapump processes, and ensure it is the current timestamp:

GGSCI (prim1-ol6-112.localdomain) 9> INFO EXTRACT EGGTEST1
GGSCI (prim1-ol6-112.localdomain) 10> INFO EXTRACT *
EXTRACT EGGTEST1 Last Started 2013-03-28 10:12 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:07 ago)
Log Read Checkpoint Oracle Redo Logs
2013-03-28 10:16:06 Seqno 352, RBA 12574320
SCN 0.11973456 (11973456)
EXTRACT PGGTEST1 Last Started 2013-03-28 10:12 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:04 ago)
Log Read Checkpoint File /u01/app/ggate/dirdat/st000010
2013-03-28 10:15:43.000000 RBA 8450

Stop the Extract and Datapump processes in the source environment:

GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT *
Sending STOP request to EXTRACT EGGTEST1 ...
Request processed.
Sending STOP request to EXTRACT PGGTEST1 ...
Request processed.

Check the status of the Replicat process in the target environment and ensure that it has processed the timestamp noted in step 3:

GGSCI (stdby1-ol6-112.localdomain) 54> INFO REPLICAT *
REPLICAT RGGTEST1 Last Started 2013-03-28 10:15 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:04 ago)
Log Read Checkpoint File ./dirdat/rt000062
2013-03-28 10:15:04.950188 RBA 10039

Stop the Replicat process in the target environment:

GGSCI (stdby1-ol6-112.localdomain) 48> STOP REPLICAT *
Sending STOP request to REPLICAT RGGTEST1 ...
Request processed.

Apply the schema changes to the source database:

SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE;
Table altered.
SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20);
Table altered.
SQL> CREATE TABLE SCOTT.ITEMS (
  ITEMNO NUMBER(5) PRIMARY KEY,
  NAME VARCHAR(20));
Table created.
SQL> CREATE TABLE SCOTT.SALES (
  INVOICENO NUMBER(9) PRIMARY KEY,
  ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO),
  EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO));
Table created.
SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY');
14 rows updated.
SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON');
1 row created.
SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER');
1 row created.
SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD');
1 row created.
SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER');
1 row created.
SQL> COMMIT;
Commit complete.

Apply the schema changes to the target database:

SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE;
Table altered.
SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20);
Table altered.
SQL> CREATE TABLE SCOTT.ITEMS (
  ITEMNO NUMBER(5) PRIMARY KEY,
  NAME VARCHAR(20));
Table created.
SQL> CREATE TABLE SCOTT.SALES (
  INVOICENO NUMBER(9) PRIMARY KEY,
  ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO),
  EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO));
Table created.
SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY');
14 rows updated.
SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON');
1 row created.
SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER');
1 row created.
SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD');
1 row created.
SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER');
1 row created.
SQL> COMMIT;
Commit complete.

Add supplemental logging for the newly added tables:

GGSCI (prim1-ol6-112.localdomain) 4> DBLOGIN USERID GGATE_ADMIN@DBORATEST
Password:
Successfully logged into database.
GGSCI (prim1-ol6-112.localdomain) 5> ADD TRANDATA SCOTT.ITEMS
Logging of supplemental redo data enabled for table SCOTT.ITEMS.
GGSCI (prim1-ol6-112.localdomain) 6> ADD TRANDATA SCOTT.SALES
Logging of supplemental redo data enabled for table SCOTT.SALES.
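The ADD TRANDATA step has to be repeated for every table introduced by a schema change. For larger patches it can be convenient to generate the commands into a GGSCI obey file and run them in one go with OBEY; the helper below is a hypothetical sketch of generating that file's contents:

```python
def add_trandata_commands(tables):
    """Build the body of a GGSCI obey file that enables supplemental
    logging for each newly created table, one ADD TRANDATA per line."""
    return "\n".join(f"ADD TRANDATA {t}" for t in tables) + "\n"

script = add_trandata_commands(["SCOTT.ITEMS", "SCOTT.SALES"])
print(script, end="")
# ADD TRANDATA SCOTT.ITEMS
# ADD TRANDATA SCOTT.SALES
```

After writing this to, say, dirprm/add_trandata.oby (a hypothetical name), it would be executed from GGSCI with OBEY ./dirprm/add_trandata.oby, after a DBLOGIN as shown above.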
Update the parameter file for generating definitions as follows:

vi $GG_HOME/dirprm/defs.prm
DEFSFILE ./dirdef/defs.def
USERID ggate_admin@dboratest, PASSWORD XXXX
TABLE SCOTT.EMP;
TABLE SCOTT.DEPT;
TABLE SCOTT.BONUS;
TABLE SCOTT.DUMMY;
TABLE SCOTT.SALGRADE;
TABLE SCOTT.ITEMS;
TABLE SCOTT.SALES;

Generate the definitions in the source environment:

./defgen paramfile ./dirprm/defs.prm

Push the definitions file to the target server using scp:

scp ./dirdef/defs.def stdby1-ol6-112:/u01/app/ggate/dirdef/

Edit the Extract and Datapump process parameters to include the newly created tables if you have specified individual table names in them.

Alter the Extract and Datapump processes to skip the changes generated by the schema change scripts:

GGSCI (prim1-ol6-112.localdomain) 7> ALTER EXTRACT EGGTEST1 BEGIN NOW
EXTRACT altered.
GGSCI (prim1-ol6-112.localdomain) 8> ALTER EXTRACT PGGTEST1 BEGIN NOW
EXTRACT altered.

Start the Extract and Datapump processes in the source environment:

GGSCI (prim1-ol6-112.localdomain) 9> START EXTRACT *
Sending START request to MANAGER ...
EXTRACT EGGTEST1 starting
Sending START request to MANAGER ...
EXTRACT PGGTEST1 starting

Edit the Replicat process parameter file to include the new tables:

./ggsci
EDIT PARAMS RGGTEST1
REPLICAT RGGTEST1
USERID GGATE_ADMIN@TGORTEST, PASSWORD GGATE_ADMIN
DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc, append, MEGABYTES 500
SOURCEDEFS ./dirdef/defs.def
MAP SCOTT.BONUS, TARGET SCOTT.BONUS;
MAP SCOTT.SALGRADE, TARGET SCOTT.SALGRADE;
MAP SCOTT.DEPT, TARGET SCOTT.DEPT;
MAP SCOTT.DUMMY, TARGET SCOTT.DUMMY;
MAP SCOTT.EMP, TARGET SCOTT.EMP;
MAP SCOTT.EMP, TARGET SCOTT.EMP_DIFFCOL_ORDER;
MAP SCOTT.EMP, TARGET SCOTT.EMP_EXTRACOL, COLMAP(USEDEFAULTS, LAST_UPDATE_TIME = @DATENOW());
MAP SCOTT.SALES, TARGET SCOTT.SALES;
MAP SCOTT.ITEMS, TARGET SCOTT.ITEMS;

Start the Replicat process in the target environment:

GGSCI (stdby1-ol6-112.localdomain) 56> START REPLICAT RGGTEST1
Sending START request to MANAGER ...
REPLICAT RGGTEST1 starting

How it works...

You can follow the previously mentioned procedure to apply any DDL changes to the tables in the source database. This procedure is valid for environments where the existing table structures between the source and the target databases are not similar. The key things to note in this method are:

The changes should only be made when all the changes extracted by GoldenGate are applied to the target database, and the replication processes are stopped.
Once the DDL changes have been performed in the source database, the definitions file needs to be regenerated.
The changes that you are making to the table structures need to be performed on both sides.

There's more...

Some of the assumptions made in the earlier procedure might not hold true for all environments. Let's see what needs to be done in cases where the environment does not satisfy these conditions:

Individual table permissions granted to the GoldenGate Admin user

If you have granted table-specific permissions to the GoldenGate Admin user in the source and target environments, you would need to grant them on the newly added tables to allow the GoldenGate user to read their data in the source environment and also to apply the changes to these tables in the target environment.

Supplemental logging for modified tables without any keys

If you are adding or deleting any columns from the tables in the source database which do not have any primary/unique keys, you would then need to drop the existing supplemental log group and re-add it. This is because when there are no primary/unique keys in a table, GoldenGate adds all columns to the supplemental log group. This supplemental log group will need to be modified when the structure of the underlying table is modified.

Supplemental log groups with all columns for modified tables

In some cases, you would need to enable supplemental logging on all columns of the source tables that you are replicating.
This is mostly applicable for consolidation replication topologies where all changes are captured and converted into INSERTs in the target environment, which usually is a data warehouse. In such cases, you need to drop and re-add the supplemental logging on the tables in which you are adding or removing any columns.
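One recurring chore in both recipes is keeping the defgen parameter file in step with the list of replicated tables whenever the table structures change. The sketch below generates the defs.prm contents from a table list; the function name and layout are our own, modelled on the parameter file shown in this recipe:

```python
def defgen_params(defs_file, db_login, password, tables):
    """Build the contents of a defgen parameter file (defs.prm):
    a DEFSFILE line, a USERID line, then one TABLE line per table."""
    lines = [f"DEFSFILE {defs_file}",
             f"USERID {db_login}, PASSWORD {password}"]
    lines += [f"TABLE {t};" for t in tables]
    return "\n".join(lines) + "\n"

print(defgen_params("./dirdef/defs.def", "ggate_admin@dboratest", "XXXX",
                    ["SCOTT.EMP", "SCOTT.ITEMS", "SCOTT.SALES"]), end="")
```

The generated file would then be fed to ./defgen paramfile ./dirprm/defs.prm and the resulting definitions file pushed to the target server, exactly as in the recipe above.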

Packt
28 Oct 2013
21 min read

Getting into the Store

This all starts by visiting https://appdev.microsoft.com/StorePortals, which will get you to the store dashboard that you use to submit and manage your applications. If you already have an account you'll just log in here and proceed. If not, we'll take a look at ways of getting one set up.

There are a couple of ways to get a store account, which you will need before you can submit any game or application to the store. There are also two different types of accounts:

Individual accounts
Company accounts

In most cases you will only need the first option. It's cheaper and easier to get, and you won't require the enterprise features provided by the company account for a game. For this reason we'll focus on the individual account. To register you'll need a credit card for verification, even if you gain a free account another way. Just follow the registration instructions, pay the fee, and complete verification, after which you'll be ready to go.

Free accounts

Students and developers with MSDN subscriptions can access registration codes that waive the fee for a minimum of one year. If you meet either of these requirements you can obtain a code using the respective method, and use that code during the registration process to set the fee to zero.

Students can access their free accounts using the DreamSpark service that Microsoft runs. To access this, create an account on www.dreamspark.com, follow the steps to verify your student status, and then visit https://www.dreamspark.com/Student/Windows-Store-Access.aspx to get your registration code.

If you have access to an MSDN subscription you can use this to gain a store account for free. Just log in to your account, and in your account benefits overview you should be able to generate your registration code.

Submitting your game

So your game is polished and ready to go. What do you need to do to get it in the store?
First log in to the dashboard and select Submit an App from the menu on the left. Here you can see the steps required to submit the app. This may look like a lot to do, but don't worry: most of these steps are very simple to resolve and can be done before you even start working on the game.

The first step is to choose a name for your game, and this can be done whenever you want. By reserving a name and creating the application entry you have a year to submit your application, giving you plenty of time to complete it. This is why it's a good idea to jump in and register your application once you have a name for it. If you change your mind later and want a different name you can always change it.

The next step is to choose how and where you will sell your game. The other thing you need to choose here is the markets you want to sell your game in. This is an area you need to be careful of, because the markets you choose here define the localization or content you need to watch for in your game. Certain markets are restrictive, and including content that isn't appropriate for a market you say you want to sell in can cause you to fail the certification process.

Once that is done you need to choose when you want to release your game: you can release as soon as certification finishes, or on a specific date. Then you choose the app category, which in this case will be Games. Don't forget to specify the genre of your game as the sub-category so players can find it.

The final option on the Selling Details page which applies to us is the Hardware requirements section. Here we define the DirectX feature level required for the game, and the minimum RAM required to run it. This is important because the store can help ensure that players don't try to play your game on systems that cannot run it.

The next section allows you to define the in-app offers that will be made available to players.
The Age rating and rating certificates section allows you to define the minimum age required to play the game, as well as submit official rating certificates from ratings boards so that they may be displayed in the store to meet legal requirements. The latter part is optional in some cases, and may affect where you can submit your game depending on local laws.

Aside from official ratings, all applications and games submitted to the store require a voluntary rating, chosen from one of the following age options:

3+
7+
12+
16+
18+

While all content is checked, the 7+ and 3+ ratings both have extra checks because of the extra requirements for those age ranges. The 3+ rating is especially restrictive, as apps submitted with that age limit may not contain features that could connect to online services, collect personal information, or use the webcam or microphone. To play it safe it's recommended that the 12+ rating is chosen, and if you're still uncertain, higher is safer.

GDF Certificates

The other entry required here, if you have official rating certificates, is a GDF file. This is a Game Definition File, which defines the different ratings in a single location and provides the necessary information to display the rating and inform any parental settings. To do this you need to use the GDFMAKER.exe utility that ships with the Windows 8 SDK and generate a GDF file which you can submit to the store. Alongside that, you need to create a DLL containing that file (as a resource) without any entry point, to include in the application package. For full details on how to create the GDF as well as the DLL, view the following MSDN article: http://msdn.microsoft.com/en-us/library/windows/apps/hh465153.aspx

The final section before you need to submit your compiled application package is the cryptography declaration. For most games you should be able to declare that you aren't using any cryptography within the game and quickly move through this step.
If you are using cryptography, including encrypting game saves or data files, you will need to declare that here and follow the instructions to either complete the step or provide an Export Control Classification Number (ECCN). Now you need to upload the compiled app package before you can continue, so we'll take a look at what it takes to do that.

App packages

To submit your game to the store, you need to package it up in a format that makes it easy to upload, and easy for the store to distribute. This is done by compiling the application as an .appx file. Before that happens, though, we need to ensure we have defined all of the required metadata and fulfilled the certification requirements, otherwise we'll be uploading a package only to have it fail soon after.

Part of this is done through the application manifest editor, which is accessible in Visual Studio by double-clicking on the Package.appxmanifest file in Solution Explorer. This editor is where you specify the name that will be seen in the start menu, as well as the icons used by the application. To pass certification, all icons have to be provided at 100 percent DPI, which is referred to as Scale 100 in the editor.

Icon/Image   Base resolution   Required
Standard     150 x 150 px      Yes
Wide         310 x 150 px      If Wide Tile enabled
Small        30 x 30 px        Yes
Store        50 x 50 px        Yes
Badge        24 x 24 px        If Toasts enabled
Splash       620 x 300 px      Yes

If you wish to provide higher quality images for people running on high DPI setups, you can do so with a simple filename change. If you add scale-XXX to your filename, just before the extension, and replace XXX with one of the following values, Windows will automatically make use of it at the appropriate DPI:

scale-100
scale-140
scale-180

In the following image you can see the options available for editing the visual assets in the application. These all apply to the start menu and application start-up experience, including the splash screen and toast notifications.
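The scale-XXX naming convention is mechanical, so the expected filename and pixel size of every asset variant can be derived programmatically, which is handy for a pre-submission asset check. In the sketch below, the base filenames (Logo.png and so on) are only the Visual Studio defaults and are assumptions; your manifest may point at different files:

```python
def scaled_name(filename, scale):
    """Insert the scale qualifier before the extension,
    e.g. Logo.png -> Logo.scale-140.png."""
    stem, _, ext = filename.rpartition(".")
    return f"{stem}.scale-{scale}.{ext}"

def scaled_size(base_size, scale):
    """Pixel dimensions an asset must have at a given scale percentage,
    relative to its Scale 100 base resolution."""
    w, h = base_size
    return (w * scale // 100, h * scale // 100)

# Base resolutions from the table above (required assets only);
# filenames are the default template names, not mandated by the store.
REQUIRED = {"Logo.png": (150, 150), "SmallLogo.png": (30, 30),
            "StoreLogo.png": (50, 50), "SplashScreen.png": (620, 300)}

for name, size in REQUIRED.items():
    print(scaled_name(name, 140), scaled_size(size, 140))
```

A fuller check would walk the project's assets folder and compare each image's actual dimensions against these expected values.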
Toast Notifications in Windows 8 are pop-up notifications that slide in from the edge of the screen and show the user some information for a short period of time. The user can click on the toast to open the application. Alongside Live Tiles, Toast Notifications allow you to give the user information when the application is not running (although they also work while it is running).

The previous table shows the different images required for a Windows 8 application, and whether they are mandatory or only required in certain situations. Note that this does not include the imagery required for the store, which includes some screenshots of the application and optional promotional art in case you want your application to be featured.

You must replace all of the required icons with your own. Automated checks during certification will detect the use of the default "box" icon shown in the previous screenshot and automatically fail the submission.

Capabilities

Once you have the visual aspects in place, you need to declare the capabilities that the application will receive. Your game may not need any; however, you should still only specify what you need to run, as some of these capabilities come with extra implications and non-obvious requirements.

Adding a privacy policy

One of those requirements is the privacy policy. Even if you are creating a game, there may be situations where you are collecting private information, which requires you to have a privacy policy. The biggest issue here is connecting to the Internet. If your game marks any of the Internet capabilities in the manifest, you automatically trigger a check for a privacy policy, as private information (in this case an IP address) is being shared. To avoid failing certification for this, you need to put together a privacy policy if you collect private information, or use any of the capabilities that would indicate you collect information.
These include the Internet capabilities as well as the location, webcam, and microphone capabilities. The privacy policy just needs to describe what you will do with the information, and directly mention your game and publisher name. Once you have the policy written, it needs to be posted in two locations. The first is a publicly accessible website, which you will provide a link to when filling out the description after uploading your game. The second is within the game itself. It is recommended you place this policy in the Windows 8 provided settings menu, which you can build using XAML or your own code. If you're going with a completely native Windows 8 application you may want to display the policy in your own way and link to it from options within your game.

Declarations

Once you've indicated the capabilities you want, you need to declare any operating system integration you've done. For most games you won't use this; however, if you're taking advantage of Windows 8 features such as share targets (the destination for data shared using the Share charm), or you have a Game Definition File, you will need to declare it here and provide the required information for the operating system. In the case of the GDF, you need to provide the file so that the parental controls system can make use of the ratings to appropriately control access.

Certification kit

The next step is to make sure you aren't going to fail the automated tests during certification. Microsoft provides the same automated tests used when you submit your app in the Windows Application Certification Kit (WACK), which is installed by default with Visual Studio 2012 or higher. There are two ways to run the test: after you build your application package, or by running the kit directly against an installed app. We'll look at the latter first, as you might want to run the test on your deployed test game well before you build anything for the store.
This is also the only way to run the WACK on a WinRT device, if you want to cover all bases. If you haven't already deployed or tested your app, deploy it using the Build menu in Visual Studio and then search for the Windows App Cert Kit using the start menu (just start typing). When you run this you will be given an option to choose which type of application you want to validate. In this case we want to select the Windows Store App option, which will give you access to the list of apps installed on your machine. From there it's just a matter of selecting the app you want and starting the test.

At this point you will want to leave your machine alone until the automated tests are complete. Any interference could lead to an incorrect failure of the certification tests. The results will indicate ways you can fix any issues; however, you should be fine for most of the tests. The biggest issues will arise from third-party libraries that haven't been developed for or ported to Windows 8. In this case the only option is to fix them yourself (if they're open source) or find an alternative.

Once you have the test passing, or you feel confident that it won't be an issue, you need to create app packages that are compatible with the store. At this point your game will be associated with the submission you have created in the Windows Store dashboard so that it is prepared for upload.
You can, however, use this type of package on other machines to install your game for testers to try out. Choosing No will give you a folder with a .ps1 file (Powershell), which you can run to execute the install script. Choosing Yes at this option will take you to a login screen where you can enter your Windows Store developer account details. Once you've logged in you will be presented with a list of applications that you have registered with the store. If you haven't yet reserved the name of your application, you can click on the Reserve Name link, which will take you directly to the appropriate page in the store dashboard. Otherwise select the name of the game you're trying to build and click on Next. The next screen will allow you to specify which architectures to build for, and the version number of the built package. As this is a C++ game, we need to provide separate packages for the ARM, x86, and x64 builds, depending on what you want to support. Simply providing an x86 and ARM build will cover the entire market; 64 bit can be nice to have if you need a lot of memory, but ultimately it is optional, and some users may not even be able to run x64 code. When you're ready click on Create and Visual Studio will proceed to build your game and compile the requested packages, placing them in the directory specified. If you've built for the store, you will need the .appxupload files from this directory when you proceed to upload your game. Once the build has completed you will be asked if you want to launch the Windows Application Certification Kit. As mentioned previously this will test your game for certification failures, and if you're submitting to the store it's strongly recommended you run this. Doing so at this screen will automatically deploy the built package and run the test, so ensure you have a little bit of time to let it run. Uploading and submitting Now that you have a built app package you can return to the store dashboard to submit your game. 
Just edit the submission you made previously and enter the Packages section, which will take you to the page where you can upload the .appxupload file. Once you have successfully uploaded your game you will gain access to the next section, the Description. This is where you define the details that will be displayed in the store. This is also where your marketing skills come into play as you prepare the content that will hopefully get players to buy your game.

You start with the description of your game, and any big feature bullet points you want to emphasize. This is the best place to mention any reviews or praise, as well as give a quick description that will help players decide if they want to try your game. You can have a number of app features listed; however, like any "back of the box" bullet points, keep them short and exciting.

Along with the description, the store requires at least one screenshot to display to the potential player. These screenshots need to be of the entire screen, which means they need to be at least 1366 x 768, the minimum resolution of Windows 8. They are also one of the best ways to promote your game, so ensure you take some great screenshots that show off the fun and appeal of your game.

There are a few ways to take a screenshot of your game. If you're testing in the simulator, you can use the screenshot icon on the right toolbar of the simulator to take the screenshot. If not, you can use Windows Key + Prt Scr to take a screenshot of your entire screen, and then use that (or edit it if you have multiple monitors). Screenshots taken with either of these tools can be found in the Screenshots folder within your Pictures library.

There are two other small pieces of information required during this stage: Copyright Info and Support contact info. For the support info, an e-mail address will usually suffice.
At this point you can also include your website and, if applicable to your game, a link to the privacy policy included in your game. Note that if you require a privacy policy, it must be included in two places: your game, and the privacy policy field on this form.

The last items you may want to add here are promotional images. These images are intended for use in store promotions, and allow Microsoft to easily feature your game with larger promotional imagery in prominent locations within the store. If you are serious about maximizing the reach of your game, you will want to include these images; if you don't, the number of places your game can be featured will be reduced. At a minimum, the 414 x 180 px image should be included if you want some form of promotion.

Now you're almost done! The next section allows you to leave notes for the testing team. This is where you would leave test account details for any features in your game that require an account, so that those features can be tested. This is also the place to leave any notes about testing, for example to point out features that might not be obvious. In certain situations you may have an exemption from Microsoft for a certification requirement; this is where you would include that exemption.

When every step has been completed and you have tick marks in all of the stages, the Submit for Certification button will unlock, allowing you to complete your submission and send it off for certification. At this stage a number of automated tests will run before human testers try your game on a variety of devices to ensure it fits the requirements for the store. If all goes well, you will receive an email notifying you of your successful certification and, if you set the release date to ASAP, your game will appear in the store a few hours later.
Certification tips

Your first stop should be the certification requirements page, which lists all of the current requirements your game will be tested for: http://msdn.microsoft.com/en-us/library/windows/apps/hh694083.aspx. There are some requirements that you should take note of, and in this section we'll take a look at ways to help ensure you pass them.

Privacy

The first of course is the privacy policy. As mentioned before, if your game collects any sort of personal information, you will need that policy in two places:

In full text within the game
Accessible through an Internet link

The default app template generated by Visual Studio automatically enables the Internet capability, and by simply having that enabled you require a privacy policy. If you aren't connecting to the Internet at all in your game, you should always ensure that none of the Internet options are enabled before you package your game. If you share any personal information, then you need to provide players with a method of opting in to the sharing. This could be done by gating the functionality behind a login screen. Note that this functionality can be locked away, and the requirement doesn't demand that you find a way to remain fully functional even if the user opts out.

Features

One requirement is that your game support both touch input and keyboard/mouse input. You can easily support this by using an input system like the one described in this article; however, by supporting touch input you get mouse input for free and technically fulfill this requirement. It's all about how much effort you want to put into the experience your player will have, and that's why including gamepad input is recommended, as some players may want to use a connected Xbox 360 gamepad as their input device in games.

Legacy APIs

Although your game might run while using legacy APIs, it won't pass certification.
This is checked through an automated test that also occurs during the WACK testing process, so you can easily check if you have used any illegal APIs. This often arises in third-party libraries which make use of parts of the standard IO library, such as the console, or the insecure versions of functions such as strcpy or fopen. Some of these APIs don't exist in WinRT for good reason; the console, for example, just doesn't exist, so calling APIs that work directly with the console makes no sense, and isn't allowed.

Debug

Another issue that may arise through the use of third-party libraries is that some of them may be compiled in debug mode. This could present issues at runtime for your app, and the packaging system will happily include these when compiling your game, unless it has to compile them itself. This is detected by the WACK and can be resolved by finding a release-mode version, or recompiling the library.

WACK

The final tip is: run WACK. This kit quickly and easily finds most of the issues you may encounter during certification, and you see the issues immediately rather than waiting for them to fail during the certification process. Your final step before submitting to the store should be to run WACK, and even while developing it's a good idea to compile in release mode and run the tests to just make sure nothing is broken.

Summary

By now you should know how to submit your game to the store, and get through certification with little to no issues. We've looked at what you require for the store, including imagery and metadata, as well as how to make use of the Windows Application Certification Kit to find problems early on and fix them up without waiting hours or days for certification to fail your game. One area unique to games that we have covered in this article is game ratings.
If you're developing your game for certain markets where ratings are required, or if you are developing children's games, you may need to get a rating certificate, and hopefully you have an idea of where to look to do this.

Resources for Article:

Further resources on this subject:
Introduction to Game Development Using Unity 3D [Article]
HTML5 Games Development: Using Local Storage to Store Game Data [Article]
Unity Game Development: Interactions (Part 1) [Article]
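Two of the automated checks described in this article (the Internet capability that triggers the privacy-policy requirement, and the scan for banned legacy APIs) can be roughly pre-checked with a few lines of scripting before you run the WACK. The sketch below is a hypothetical helper; the capability names and the banned-API list are illustrative assumptions, not the official lists that WACK tests against:

```python
import re
import xml.etree.ElementTree as ET

# Illustrative subset only; the authoritative banned-API list is
# whatever the WACK's supported-API test actually checks.
BANNED_APIS = ["strcpy", "fopen", "AllocConsole"]
INTERNET_CAPS = {"internetClient", "internetClientServer"}

def needs_privacy_policy(manifest_xml: str) -> bool:
    """True if the app manifest declares an Internet capability,
    which by itself is enough to require a privacy policy."""
    root = ET.fromstring(manifest_xml)
    for elem in root.iter():
        # Match by local name so any namespace prefix is accepted.
        if elem.tag.split("}")[-1] == "Capability":
            if elem.get("Name") in INTERNET_CAPS:
                return True
    return False

def find_banned_calls(source: str) -> list:
    """Return the banned API names that appear as calls in source text."""
    pattern = re.compile(r"\b(" + "|".join(BANNED_APIS) + r")\s*\(")
    return sorted({m.group(1) for m in pattern.finditer(source)})

manifest = """<Package xmlns="http://schemas.microsoft.com/appx/2010/manifest">
  <Capabilities><Capability Name="internetClient"/></Capabilities>
</Package>"""
code = 'char dst[8]; strcpy(dst, src); FILE *f = fopen("save.dat", "rb");'

print(needs_privacy_policy(manifest))  # True: privacy policy required
print(find_banned_calls(code))         # ['fopen', 'strcpy']
```

Running a check like this in a pre-submission script surfaces both failures early; the WACK itself remains the authoritative test.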
Creating, Installing and Tweaking your theme using Plone 3

Packt
26 May 2010
12 min read
(For more resources on Plone here.)

We will first inspect a few structural changes and install them, and then finally examine the various components and skin layer items that have been changed, one at a time. Where restarting Zope or rerunning your buildout would be required, this will be noted.

About the theme

This theme and its design are available for personal and professional use to anyone, and can be freely modified. You can (and should) download the files from https://svn.plone.org/svn/collective/plonetheme.guria/trunk using the following command:

svn co https://svn.plone.org/svn/collective/plonetheme.guria/trunk plonetheme.guria

Note the space between the words trunk and plonetheme.guria. This theme is intended for installation on Plone 3 web sites. The finished theme should look like the following, but we have work to do to make this happen:

This theme was created by me, for use by a charity group in India, called Guria (http://www.guriaindia.org), dedicated to ending human trafficking and prostitution. The finished site is currently in development, and is generously hosted free of charge by the talented folks at Six Feet Up (sixfeetup.com). Additionally, most of the code and lessons learned come courtesy of similar themes created by the staff at ONE/Northwest in Seattle, Washington.

The design for this theme was created with the assumption that most of the tasks would need to be present in this theme. In fact, the only task not covered here is the creation of a new viewlet manager. Creation of viewlet managers is discussed at http://plone.org/documentation/how-to/adding-portlet-managers and http://plone.org/documentation/manual/theme-reference/elements/viewletmanager/override.
Creating a theme product

I created a theme product named plonetheme.guria, using the command line syntax paster create -t plone3_theme, while we were located in the src/ directory of our buildout, as seen next:

[bash: /opt/mybuildout/src] paster create -t plone3_theme plonetheme.guria
Selected and implied templates:
  ZopeSkel#basic_namespace  A project with a namespace package
  ZopeSkel#plone            A Plone project
  ZopeSkel#plone3_theme     A Theme for Plone 3.0
Variables:
  egg:     plonetheme.guria
  package: plonethemeguria
  project: plonetheme.guria
Enter namespace_package (Namespace package (like plonetheme)) ['plonetheme']:
Enter package (The package contained namespace package (like example)) ['example']: guria
Enter skinname (The skin selection to be added to 'portal_skins' (like 'My Theme')) ['']: Guria Theme for the Plone Theming Book
Enter skinbase (Name of the skin selection from which the new one will be copied) ['Plone Default']:
Enter empty_styles (Override default public stylesheets with empty ones?) [True]: False
Enter include_doc (Include in-line documentation in generated code?) [False]:
Enter zope2product (Are you creating a Zope 2 Product?) [True]:
Enter version (Version) ['0.1']:
Enter description (One-line description of the package) ['An installable theme for Plone 3.0']:
Enter long_description (Multi-line description (in reST)) ['']:
Enter author (Author name) ['Plone Collective']: Veda Williams
Enter author_email (Author email) ['product-developers@lists.plone.org']: email@email.com
Enter keywords (Space-separated keywords/tags) ['web zope plone theme']:
Enter url (URL of homepage) ['http://svn.plone.org/svn/collective/']:
Enter license_name (License name) ['GPL']:
Enter zip_safe (True/False: if the package can be distributed as a .zip file) [False]:
Creating template basic_namespace
Creating directory ./plonetheme.guria
[snip]

You may wish to generate a new Plone theme product yourself, so that you can compare and contrast the differences between the Guria theme and a vanilla Plone theme. Notice that the full name of the theme is plonetheme.guria, and where an item shows as blank, it defaults to the example value in that step. In other words, the namespace package defaults to plonetheme, because there was no reason to change it. The skinname is set to a single lowercase word out of stylistic preference. It's important to also note that you should not use hyphens or spaces in your theme names, as they will not be recognized by your buildout.

We've chosen not to override Plone's default stylesheets, and instead we want to build on top of Plone's default (and excellent!) stylesheets. I prefer this method mostly because the layout needed for Plone's Contents view and other complex structural pieces is already taken care of by Plone's base stylesheets. It's easier than trying to rebuild those from scratch every time, but this is merely a personal preference.

Following the creation of the theme, we register the theme product in our buildout.cfg, using the following syntax:

[buildout]
...
develop = src/plonetheme.guria
...
[instance]
eggs = plonetheme.guria
...
zcml = plonetheme.guria
...

If we were using the eggtractor egg, there would be no need to add these lines of code to our buildout.cfg; all we would need to do is rebuild our buildout and it would automatically recognize the new egg. eggtractor can be found at http://pypi.python.org/pypi/buildout.eggtractor, and is documented thoroughly.
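The three registration points in buildout.cfg are easy to miss. The following is a rough sanity-check sketch (a hypothetical helper, not part of buildout or the theme product) that parses a buildout-style file with Python's standard configparser and confirms the egg is listed in develop, eggs, and zcml:

```python
import configparser

def egg_registered(cfg_text: str, egg: str) -> bool:
    """Check that an egg is referenced in the develop, eggs, and zcml
    options of a buildout.cfg. A rough sanity check, not a buildout API."""
    cfg = configparser.ConfigParser()
    cfg.read_string(cfg_text)
    develop = cfg.get("buildout", "develop", fallback="")
    eggs = cfg.get("instance", "eggs", fallback="")
    zcml = cfg.get("instance", "zcml", fallback="")
    return (
        ("src/" + egg) in develop.split()
        and egg in eggs.split()
        and egg in zcml.split()
    )

sample = """
[buildout]
develop = src/plonetheme.guria

[instance]
eggs = plonetheme.guria
zcml = plonetheme.guria
"""

print(egg_registered(sample, "plonetheme.guria"))  # True
```

Buildout's own parser is more forgiving than configparser, so treat this only as a quick smoke test before rebuilding.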
Assuming we are not using eggtractor, we must rebuild our buildout, as we have altered ZCML code and added a new egg:

[bash: /opt/mybuildout/src/] ./bin/buildout

This would be a good time to check your vanilla theme product into Subversion, so that you can track back to the original version, if needed. However, since this is an existing theme, there is no need to do so. For the purposes of following along, it might be best if you do not yet install the theme. We want to make some changes first. However, we will point out some caveats along the way, in case you installed the theme prematurely.

Altering the theme product's structure

Several modifications have been made to the theme product's structure to shorten folder names and change the default behavior. Again, this is mostly a personal preference. Let's take a look at these changes and how they were achieved.

Renaming the theme

In our theme product, you will see a file named profiles.zcml, located at mybuildout/src/plonetheme.guria/plonetheme/guria/profiles.zcml. The code looks like this:

<configure i18n_domain="plonetheme.guria">
  <genericsetup:registerProfile
      name="default"
      title="Guria Theme for the Plone Theming Book"
      directory="profiles/default"
      description='Extension profile for the "Guria Theme for the Plone Theming Book" Plone theme.'
      provides="Products.GenericSetup.interfaces.EXTENSION"
      />
</configure>

If you named your theme in a way that was less descriptive, you could alter the title. Naming your theme product properly is important, because you may have different types of products used for a given web site, for example, a policy product for content that might be used in tandem with your theme product. This text is what you see in the portal_quickinstaller at http://localhost:8080/mysite/portal_quickinstaller/manage_installProductsForm, where mysite is the name of your Plone site.
You can also see this name if you install your theme product via Site Setup | Add-on Products, found at http://localhost:8080/mysite/prefs_install_products_form. If you change your XML here, and your theme product is already installed, you'll need to start (or restart) your Zope instance, using:

[bash: /opt/mybuildout] ./bin/instance fg

Shortening folder names

Next, we look at the folder structure of our theme product. The standard Plone 3 theme produces folders with names like plonetheme_guria_custom_images, plonetheme_guria_custom_templates, and plonetheme_guria_styles. While there is nothing wrong with keeping this structure, it can be cumbersome to type or tab through (especially when checking items into Subversion). However, you might want to keep the existing folder names to help you distinguish which items of base Plone you modified. This can make migrations easier. If you choose this route, you probably want to create additional folders for non-base-Plone items. I personally prefer the shorter folder names and don't worry too much about the migration issues.

In the case of this theme product, I opted to make the folder names shorter. First, I altered the names of the folders in the skins/ folder to guria_images, guria_styles, and guria_templates. Then, in the theme, go to mybuildout/plonetheme.guria/plonetheme/guria/skins.zcml. The code in this file is altered to appear as follows:

<configure i18n_domain="plonetheme.guria">
  <!-- File System Directory Views registration -->
  <cmf:registerDirectory name="guria_images"/>
  <cmf:registerDirectory name="guria_templates"/>
  <cmf:registerDirectory name="guria_styles"/>
</configure>

One more step is required here.
In plonetheme.guria/plonetheme/guria/profiles/default/skins.xml, the code is changed to read as follows:

<?xml version="1.0"?>
<object name="portal_skins" allow_any="False" cookie_persistence="False"
        default_skin="Guria Theme for the Plone Theming Book">
  <object name="guria_images" meta_type="Filesystem Directory View"
          directory="plonetheme.guria:skins/guria_images"/>
  <object name="guria_templates" meta_type="Filesystem Directory View"
          directory="plonetheme.guria:skins/guria_templates"/>
  <object name="guria_styles" meta_type="Filesystem Directory View"
          directory="plonetheme.guria:skins/guria_styles"/>
  <skin-path name="Guria Theme for the Plone Theming Book" based-on="Plone Default">
    <layer name="guria_images" insert-after="custom"/>
    <layer name="guria_templates" insert-after="guria_images"/>
    <layer name="guria_styles" insert-after="guria_templates"/>
  </skin-path>
</object>

Basically, the steps are the following:

1. Rename the folders on the filesystem.
2. Modify the skins.zcml file to change the name of the filesystem directory view (what you see in the portal_skins/properties area of the ZMI).
3. Modify the skins.xml file in the profiles/default folder to match. This alters the basic profile of your theme product.

If you wanted to add additional folders and filesystem directory views here (a scripts/ folder, for example), you'd just add code by following the conventions given to you in these files and then create additional folders. Making changes to the ZCML file means that you would need to do a restart of your Zope instance.

If you installed your theme product before making the changes to the skin layer names, you might want to inspect the skin layers at http://localhost:8080/mysite/portal_skins/manage_propertiesForm, to make sure that the correct skin layers are listed. You might even need to reimport the "skins tool" step via portal_setup at http://localhost:8080/mysite/portal_setup/manage_importSteps.
Make sure you choose the correct profile first by choosing your theme product's name from the drop-down list at the top of the import page. The theme product's name is the same name as you find in your profiles.zcml file.

Adjusting how stylesheets and images are used

Next, we remove some of the default behavior given to us by the plone3_theme recipe. In a vanilla theme product, folders named images/ and stylesheets/ are inserted into the plonetheme.guria/plonetheme/guria/browser/ directory. Additionally, a file named main.css is included in the stylesheets/ directory. I chose not to place the theme's images or stylesheets in the browser/ directory, as this is generally unnecessary for most themes. Advanced programmers may wish to expose these items to the browser layer, but this is generally a personal choice and carries with it additional consequences.

I deleted the folders mentioned above, as well as the main.css file. Then, I opened the file named configure.zcml, located at plonetheme.guria/plonetheme/guria/browser/, and removed all of the following boilerplate text:

<!-- Viewlets registration -->

<!-- Zope 3 browser resources -->

<!-- Resource directory for images -->
<browser:resourceDirectory
    name="plonetheme.guria.images"
    directory="images"
    layer=".interfaces.IThemeSpecific"
    />

<!-- Resource directory for stylesheets -->
<browser:resourceDirectory
    name="plonetheme.guria.stylesheets"
    directory="stylesheets"
    layer=".interfaces.IThemeSpecific"
    />

I then removed the highlighted code below from plonetheme.guria/plonetheme/guria/profiles/default/cssregistry.xml:

<stylesheet title="" id="++resource++plonetheme.guria.stylesheets/main.css"
    media="screen" rel="stylesheet" rendering="import"
    cacheable="True" compression="safe" cookable="True"
    enabled="1" expression=""/>

And replaced it with the following:

<stylesheet title="" id="guria.css"
    media="screen" rel="stylesheet" rendering="import"
    cacheable="True" compression="safe" cookable="True"
    enabled="1" expression=""/>

This, in effect, tells our theme product that we will be using a stylesheet named guria.css (or more correctly, guria.css.dtml, as we'll see in a moment). This stylesheet does not yet exist, so we have to create it. I wanted the option of making use of the DTML behavior provided by Plone, so that I could use certain base properties provided to us via the base_properties.props file (also located in our skins/guria_styles/ folder). DTML essentially allows us to use property-sheet variables and apply changes on a more global scale.

The easiest way to create this new stylesheet is to go to your mybuildout/buildout-cache/eggs/Plone[some version number]/Products/CMFPlone/skins/plone_styles/ploneCustom.css and copy the contents of that file into a new stylesheet (named guria.css.dtml) in your theme's guria_styles/ folder (located in the skins/ directory at mybuildout/plonetheme.guria/plonetheme/guria/skins/guria_styles). The important bits of code you want are as follows:

/* <dtml-with base_properties> (do not remove this :) */
/* <dtml-call "REQUEST.set('portal_url', portal_url())"> (not this either :) */

/* DELETE THIS LINE AND PUT YOUR CUSTOM STUFF HERE */

/* </dtml-with> */

Again, we would need to restart our Zope at this point, as we have modified our ZCML. If we had already installed our theme product, we'd also have to import our cssregistry.xml file via portal_setup in the ZMI, to capture the new GenericSetup profile settings. However, we have not yet installed the product, so we do not need to worry about this.
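The folder-renaming procedure from the Shortening folder names section (rename on the filesystem, then update skins.zcml and skins.xml to match) is mechanical, so it can be scripted. The following is a hypothetical, illustrative helper, not part of the theme product; the mapping uses the default and shortened names from this article:

```python
# Old plone3_theme folder names mapped to the shortened Guria names.
RENAMES = {
    "plonetheme_guria_custom_images": "guria_images",
    "plonetheme_guria_custom_templates": "guria_templates",
    "plonetheme_guria_styles": "guria_styles",
}

def apply_renames(text: str) -> str:
    """Rewrite old skin-folder names to the shortened ones in the
    contents of a registration file (skins.zcml or skins.xml)."""
    for old, new in RENAMES.items():
        text = text.replace(old, new)
    return text

zcml = '<cmf:registerDirectory name="plonetheme_guria_custom_images"/>'
print(apply_renames(zcml))
# <cmf:registerDirectory name="guria_images"/>
```

You would run the same substitution over both registration files, rename the directories themselves (for example with os.rename), and then restart Zope as noted above.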

Planning Your Site in Adobe Muse

Packt
27 Sep 2012
11 min read
Page layouts

The layout of your website can be a deciding factor on whether your visitors will stay on your website or leave with an impatient click. You could think of the layout as a map. If it's easy for the visitors to "read" the map, they are more likely to stick around and find the content you're providing. Let's take a look at some of the typical layouts we see on the Web today.

Bread and butter layouts

When we're designing a layout for a web page, there are a number of sections that we need to include. These sections can be broken into the following:

Consistent content: This does not change throughout the site. Examples of this type of content are logos, navigation bars, and footers.
Changing content: This is the part of the page that changes throughout the site, usually the main content. In some situations, the content of the sidebar may also change.

A web designer's job is to create a layout that keeps the visitor focused on the content while keeping it nice and easy to navigate around the site. Some examples of conventional or bread and butter site layouts are shown in the following figure:

You have a very short amount of time to capture a visitor's attention. So by choosing one of these basic layouts, you're using a tried and tested setup, which many web users will feel at home with. Don't worry that these layouts look "boxy". You can use images, colors, and typefaces which complement the purpose of your site to completely disguise the fact that every web page is essentially made up of boxes. The bread and butter layouts featured previously are well-tested guides; however, there is absolutely no obligation for you to use one of these.

What appears on a typical web page?

So we've seen some basic layouts. Now we'll look at some of the elements that appear on (nearly) every web page.
Logo

The logo is part of a company's overall branding and identity, and appears at the top of each page on the site along with the company name and tagline, just as it would on printed forms of marketing, such as business cards, brochures, and letterheads. This identity block increases brand recognition and ensures users know that the pages they're viewing are part of a single site. Frequently, the logo is also a link back to the home page of the site.

Navigation bar

The navigation for your site should be easy to use and easy to find. Just like the logo, it should appear near the top of the page. You may decide to use a horizontal menu across the top of the page, a vertical menu in a sidebar, or a combination of the two. Either way, your main navigation should be visible "above the fold", that is, in the area of a web page that can be viewed without visitors having to scroll.

Content

Content is king. This is what your visitors have come for. If the visitors can't find what they're looking for, they will move on very quickly. The main content is an important focal point in your design; don't waste time filling it with unnecessary "stuff".

Footer

Sitting at the bottom of the page, the footer usually holds copyright information, contact links, and legalities of the site. Some designers have become very imaginative with footers and use this area to hold additional links, tweets, and "about me" information. The footer clearly separates the main content from the end of the page and indicates to users that they're at the bottom of the page.

In the following screenshot, you can see a page from the Apple website, which is highly regarded for its aesthetic design. Each section is clearly delineated. If you keep in mind the idea of your site's layout as a map, then you can determine where you want to lead your visitors on the site.

Wireframes

Wireframing is an important part of the design process for both simple and complex projects.
If you're creating a website for a client, a wireframe is a great tool for communicating your ideas visually at an early stage, rather than just giving a verbal description. If you're creating a website for yourself, the wireframe helps to clarify what is required on each page of your website. It can be considered an overlap between the planning process and the design process. Creating a simple wireframe ensures that your page designs take into account all of the elements you'll add to your pages and where they will be positioned. Wireframes are cost-effective because the time spent in the initial stages potentially saves you from losing much more time revising the design at a later stage.

Wireframes can be created in several ways, including pen and paper and computer-based tools. When it comes to computer-based applications for wireframing, there are many options available. Some designers use Photoshop, Illustrator, or even InDesign to put together their wireframes. Specific wireframing software packages that are popular with web designers include Balsamiq and OmniGraffle.

Wireframes and Mockups and Prototypes. Oh my!

You may hear web designers refer to wireframes, mockups, and prototypes. Although these terms are sometimes used interchangeably, it's important to understand that they are three different things:

A wireframe is a basic illustration showing the structure and components of a web page.
A mockup is an image file focusing on the design elements in the site. It contains the graphics and other page elements that make up the web page, but may contain dummy text and images.
A prototype is an almost-complete or semi-functional web page, constructed with HTML and CSS. Prototypes give the client (or yourself) the ability to click around and check out how the final site will work.

What to include in a wireframe?

Think about which design elements will appear on each page of your website.
As mentioned already, most websites will have elements such as logos, navigation, search bars, and footers in consistent positions throughout the site. Next, think about any extra elements that may be specific to individual pages. These include graphics and dynamic widgets. Once you know what's required, you can start to create your wireframe based on these elements.

Some designers like to create their wireframes with the "big picture" in mind. Once the basic layout is in place, they get feedback from the client and revise the wireframe if necessary. Others like to create a very detailed wireframe for each page on the site, including every single design element on the list, before showing it to the client. Wireframes let us try out several different ideas before settling on our favorite design, which can then be brought forward to the mockup stage.

Obviously our focus in this book is on using Muse, but I would urge you not to rule out using paper sketches. It's a great way to quickly get ideas out of your head and into a tangible, visible layout. Web.without.words (www.webwithoutwords.com) is an interesting website dedicated to showing popular and well-known sites as wireframes. The text and images on each site are blocked out, and it's a nice way to look at web pages and see how they can be broken down into simple boxes without getting caught up in the content.

Wireframes with Muse

So what are the advantages of using Muse to create our wireframes? Well, Muse not only lets you create wireframes, but it also allows you to quickly create prototypes using those wireframes. This means you can show clients the functionality of the website with the basic layout. The prototype produced by Muse can be reviewed in any web browser, giving the client a good idea of how the site will appear. This kind of early testing can help alleviate time-consuming problems further down the line of the design process.
We're going to prepare a site and wireframe now for a fictitious website about windsurfing. First, we'll create a new site, and then add pages in the Plan view.

Site structure with Plan view

Let's start by creating a new site:

1. Open Muse. Choose File | New Site.
2. In the New Site dialog box, set Page Width to 960 and Min Height to 800 pixels.
3. Set Margins to 0 all around and Padding Top and Bottom to 10 pixels each.
4. Set the number of Columns to 16. The columns appear as guidelines on the page and we use them to help us align the design elements on our layout. Note that Gutter is set to 20 by default; leave this as it is. The Column Width is calculated by Muse and you should see a value of 41 appear automatically in that field. Remember that all of these values can be changed later if necessary.
5. Click on OK. The Plan view opens and you'll see a thumbnail representing the Home page at the top left, and a thumbnail representing the A-Master page on the bottom pane.
6. Save your site right away by selecting File | Save Site. Give it a descriptive name you'll recognize, such as Windsurf.muse.

To create new pages, click on the plus (+) sign to the right of or below the existing pages, and then click on the page's name field to type its name:

1. Click on the plus sign to the right of the Home page and name the new page Gear.
2. Click on the plus sign below the Gear page to add a subpage and name that page Sails.
3. Click on the plus sign to the right of the Sails page and name the new page Boards. Sails and Boards are now on the same level and are subpages of the Gear page.
4. Click on the plus sign to the right of the Gear page and name the new page Learning.
5. Click on the plus sign to the right of Learning and add one more page called Contact.

Your Plan view should now look like the following screenshot:

Working with thumbnails in the Plan view

It's easy to add, delete, reposition, or duplicate pages when working in the Plan view.
Right-click (Win) or Ctrl + click (Mac) on a thumbnail to see a contextual menu. This menu provides every option for managing your pages. In the previous screenshot, you can see the menu that appears when you right-click/Ctrl + click.

New Child Page: This option creates a new blank page at a lower level than the current thumbnail.
New Sibling Page: This option creates a new blank page at the same level as the current thumbnail.
Duplicate Page: This option makes an exact copy of the current page. This is most useful when you have added content and applied some formatting.
Delete Page: This option gets rid of the page.
Rename Page: This option allows us to change the name of the page.
Go to Page: This option opens up the current page in the Design view.
Page Properties: This option opens the Page Properties dialog box, allowing you to set properties for the current page only.
Reset Page Properties: This option reverts to the original settings for the page.
Export Page: This option allows you to export your page as HTML and CSS.
Menu Options: This option allows you to choose how the page will be included (or not included) in the automatically-created menu.
Masters: This option lets you choose which Master design will be applied to the page.

The context menu is not the only way to get to these options; for example, the most common tasks in the Plan view can be completed as follows:

You can rename a page by double-clicking on the page name.
You can delete a page by hovering your mouse over the thumbnail and then clicking on the x icon that appears in the top-right corner.
To reposition a page in your site map hierarchy, you can drag-and-drop a thumbnail on the same level or on a sublevel.

Spend a couple of minutes adding, deleting, and repositioning pages so that you get a feel for creating the site structure. You'll find the Plan view intuitive to use and extremely fast for creating site maps. You can choose Edit | Undo to undo any of the steps you've taken.
Muse tracks all the page names, and later in the design process it allows us to create menus quickly using menu widgets. All links created in the Plan view are maintained and are updated automatically if we make a change to the site structure. You can come back to the Plan view at any point during your web design process.
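The page hierarchy built in the Plan view walkthrough can be pictured as a small tree. Muse is a GUI tool and exposes no scripting API, so the sketch below is purely illustrative: it models the Windsurf site map as nested dictionaries and prints the same hierarchy the Plan view shows.

```python
# Windsurf site map from the Plan view walkthrough: Sails and Boards
# are subpages of Gear; everything else sits at the top level.
SITE = {
    "Home": {},
    "Gear": {"Sails": {}, "Boards": {}},
    "Learning": {},
    "Contact": {},
}

def render(tree: dict, depth: int = 0) -> list:
    """Flatten the site tree into indented lines, depth-first."""
    lines = []
    for name, children in tree.items():
        lines.append("  " * depth + name)
        lines.extend(render(children, depth + 1))
    return lines

print("\n".join(render(SITE)))
```

The indented lines mirror the thumbnail levels: child pages sit one level deeper than their parent, just as subpages appear below their parent page in the Plan view.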

Users, Profiles, and Connections in Elgg

Packt
23 Oct 2009
8 min read
Connecting to Friends and Users

I hope you're convinced how important friends are to a social network. Initially, you'll have to manually invite your friends over to join. I say initially, because membership on a social network is viral. Once your friends are registered members of your network, they can also bring in their own friends. This means that soon your friends would have invited their own friends as well. Chances are that you might not know these friends of your friends. So, Elgg not only allows you to invite friends from outside, but also to connect with users already on the network.

Let's understand these situations in real-life terms. You invite your friends over to a party with you at your new Star Trek themed club. That's what you'll do with Elgg, initially. So your friends like the place, and next time around they bring in more friends from work. These friends of friends from work talk about your place with their friends, and so on, until you're hosting a bunch of people in the club that you haven't ever met in your life. You overhear some people discussing Geordi La Forge, your favorite character from the show. You invite them over for drinks. That's connecting with users already on the network. So let's head on over to Elgg and invite some friends!

Inviting Friends to Join

There are two ways of inviting users to join your network. Either send them an email with a link to join the website, or let Elgg handle sending them emails. If you send them emails, you can include a direct link to the registration page. This link is also on the front page of your network, which every visitor will see. It asks visitors to register an account if they like what's on the network.

Let Elgg Handle Registration

This is the most popular method of inviting users to join the network. It's accessible not only to you, but also to your friends once they've registered with the network. To allow Elgg to send emails on your behalf, you'll have to be logged into Elgg.
Once you login, click on the Your Network button on the top navigation bar. This will take you to a page which links to tools that'll help you connect with others. The last link in this bar (Invite a Friend) does exactly what it says. When you click on this link, it'll explain to you some benefits of inviting friends over. The page has three fields:

Their name: Enter the name of the friend you're sending the invitation to.
Their email address: Very important. This is the address to where the invitation is sent.
An optional message: Elgg sends an email composed using a template. If you want to add a personal message to Elgg's email, you can do so here.

In the email, which Elgg sends on behalf of the network's administrator (that means you), it displays the optional message (if you've sent one), along with a link to the registration page. The invitation is valid for seven days, after which the registration link in the email isn't valid. When your friends click on the registration form, it asks them to enter their:

Name: This is your friend's real name. When he arrives here by clicking the link in the email, this field already has the same name as the one in the email. Of course, your friend can choose to change it if he pleases.
Username: The name your friend wants to use to log in to the network. Elgg automatically suggests one based on your friend's real name.
Password: The last two fields ask your friend to enter (and then re-enter to confirm) a password. This is used along with the username to authenticate him on the system.

Once your friends enter all the details and click on join, Elgg creates an account for them, logs them in, and dispatches a message to them containing the login details for reference.

Build a Profile

The first thing a new user has to do on the network is to create his profile. If you haven't yet built up a profile yourself, now is a good time. To recap, your profile is your digital self. By filling in a form, Elgg helps you define yourself in terms that'll help other members find and connect to you. This is again where socializing using Elgg outscores socializing in real life. You can find people with similar tastes, likes, and dislikes as soon as you enter the network. So let's steam ahead and create a digital you.
By filling in a form, Elgg helps you define yourself in terms that'll help other members find and connect to you. This is again where socializing using Elgg outscores socializing in real life. You can find people with similar tastes, likes, and dislikes, as soon as you enter the network. So let's steam ahead and create a digital you. The Various Profile Options Once you are logged into your Elgg network, select the Your Profile option from the top navigation-bar. In the page that opens, click the first link, Edit this profile. This opens up a form, divided into five tabs—Basic details, Location, Contact, Employment, and Education. Each tab helps you fill in details regarding that particular area. You don't necessarily have to fill in each and every detail. And you definitely don't have to fill them all in one go. Each tab has a Save your profile button at the end. When you press this button, Elgg updates your profile instantaneously. You can fill in as much detail as you want, and keep coming back to edit your profile, and append new information. Let's look at the various tabs: Basic details: Although filling information in any tab is optional, I'd advise you to fill in all details in this tab. This will make it easy, for you to find others, and for others to find you. The tab basically asks you to introduce yourself, list your interests, your likes, your dislikes, your goals in life, and your main skills. Location: This tab requests information that'll help members reach you physically. Fill in your street address, town, state, postal code, and country. Contact: Do you want members to contact you outside your Elgg network? This tab requests both physical as well as electronic means which members can use to get in touch with you. Physical details include your work, home, and mobile telephone number. Electronic details include your email address, your personal, and official websites. Elgg can also list information to help users connect to you on instant messenger. 
It supports ICQ, MSN, AIM, Skype, and Jabber.
Employment: List your occupation, the industry and company you work in, and your job title and description. Elgg also lets you list your career goals and suggests you do so to "let colleagues and potential employers know what you'd like to get out of your career."
Education: Here you can specify your level of education, which high school, university, or college you attended, and the degree you hold.

As you can clearly see, Elgg's profiling options are very diverse and detailed. Rather than serve the sole purpose of describing you to visitors, the profile also helps you find new friends, as we'll see later in this article.

What is FOAF?

While filling in the profile, you must have noticed an Upload a FOAF file area down at the bottom of all tabs. FOAF, or Friend of a Friend, is a project (http://www.foaf-project.org/) to help create "machine-readable pages that describe people, the links between them, and the things they create and do". The FOAF file includes lots of details about you, and if you have already created a FOAF profile, Elgg can use that to pick out information describing you from in there. You can modify the information once it's imported into Elgg, if you feel the need to do so. The FOAF-a-Matic tool (http://www.ldodds.com/foaf/foaf-a-matic.en.html) is a simple web-based program you can use to create a FOAF profile.

A Face for Your Profile

Once you have created your digital self, why not give it a face as well? The default Elgg picture with a question mark doesn't look like you! To upload your picture, head over to Your Profile and select the Change site picture link. From this page, click Browse to find and select the picture on your computer. Put in an optional description, and then choose to make it your default icon. When you click the Upload new icon button, Elgg will upload the picture. Once the upload completes, Elgg will display the picture.
Click the Save button to replace Elgg's default icon with this picture.

Elgg will automatically resize your picture to fit into its small area. You should use a close-up of yourself; otherwise, the picture will lose clarity when resized.

If you don't like the picture when it appears on the website, or you want to replace it with a new one, simply tick the Delete check-box associated with the picture you don't like. When you click Save, Elgg will revert to the default question-mark guy.
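As an aside to the FOAF section above: a FOAF file is plain RDF/XML, so you can even write one by hand. A minimal profile might look like the following sketch; the name, mailbox, and homepage are made-up placeholders, not values from this article:

```xml
<!-- Minimal FOAF profile; rdf: and foaf: are the standard namespaces. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person>
    <foaf:name>Jane Example</foaf:name>
    <foaf:mbox rdf:resource="mailto:jane@example.com"/>
    <foaf:homepage rdf:resource="http://www.example.com/jane"/>
  </foaf:Person>
</rdf:RDF>
```

Tools such as the FOAF-a-Matic mentioned earlier generate exactly this kind of file, which Elgg can then import into your profile.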
Packt
03 Oct 2013
8 min read

Quick start – using Foundation 4 components for your first website

(For more resources related to this topic, see here.)

Step 1 – using the Grid

The base building block that Foundation 4 provides is the Grid. This component allows us to easily put the rest of the elements in the page. The Grid also avoids the temptation of using tables to put elements in their places. Tables should only be used to show tabular data; don't use them with any other meaning. Web design using tables is considered a really terrible practice.

Defining a grid, intuitively, consists of defining rows and columns. There are basically two ways to do this, depending on which kind of layout you want to create. They are as follows:

If you want a simple layout which evenly splits the contents of the page, you should use the Block Grid. To use the Block Grid, we must have the default CSS package or be sure to have selected Block Grid from a custom package.
If you want a more complex layout, with different-sized elements that are not necessarily evenly distributed, you should use the normal Grid. This normal Grid contains up to 12 grid columns to put elements into.

After picking the general layout of your page, you should decide whether you want your grid structure to be the same for small devices, such as smartphones or tablets, as it will be on large devices, such as desktop computers or laptops. So, our first task is to define a grid structure for our page as follows:

Select how we want to distribute our elements. We choose the Block Grid, the simpler one. Consequently, we define a <ul> element with several <li> elements inside it.
Select whether we want a different structure for large and small screens. We choose Yes. This is important to determine which Foundation 4 CSS class our elements will belong to.

As a result, we have the following code:

<ul class="large-block-grid-4">
  <li><img src="demo1.jpg"></li>
  <li><img src="demo2.jpg"></li>
  <li><img src="demo3.jpg"></li>
  <li><img src="demo4.jpg"></li>
</ul>

The key concept here is the class large-block-grid-4.
There are two important classes related to the Block Grid:

small-block-grid: If the element belongs to this class, the resulting Block Grid will keep its spacing and configuration for any screen size.
large-block-grid: With this, the behavior will change between large and small screen sizes. So, large forces the responsive behavior.

You can also use both classes together. In that case, large overrides the behavior of the small class. The number 4 at the end of the class name is just the number of grid columns.

The complete code of our page so far is as follows:

<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" ><![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
  <title>My First Foundation Web</title>
  <link rel="stylesheet" href="css/foundation.css" />
  <script src="js/vendor/custom.modernizr.js"></script>
  <style>
    img {
      width: 300px;
      border: 1px solid #ddd;
    }
  </style>
</head>
<body>
  <!-- Grid -->
  <ul class="large-block-grid-4">
    <li><img src="demo1.jpg"></li>
    <li><img src="demo2.jpg"></li>
    <li><img src="demo3.jpg"></li>
    <li><img src="demo4.jpg"></li>
  </ul>
  <!-- end grid -->
  <script>
    document.write('<script src=' + ('__proto__' in {} ? 'js/vendor/zepto' : 'js/vendor/jquery') + '.js><\/script>')
  </script>
  <script src="js/foundation.min.js"></script>
  <script>
    $(document).foundation();
  </script>
</body>
</html>

We have created a simple HTML file with a basic grid, using the Block Grid facility to lay out four images in a list, four to a row. The following screenshot shows how our page looks:

Not very fancy, uh? Don't worry, we will add some nice features in the following steps. We have way more options to choose from for the grid arrangements of our pages. Visit http://foundation.zurb.com/docs/components/grid.html for more information.
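For completeness, the normal 12-column Grid mentioned in Step 1 works with rows and columns rather than block lists. A minimal sketch could look like this; the class names are standard Foundation 4 grid classes, while the content is purely illustrative:

```html
<!-- One row split into a main area (8 of 12 columns on large screens)
     and a sidebar (4 of 12). On small screens, both stack full-width. -->
<div class="row">
  <div class="large-8 small-12 columns">Main content</div>
  <div class="large-4 small-12 columns">Sidebar</div>
</div>
```

The column counts inside a row should add up to 12, and Foundation collapses them responsively just as it does with the Block Grid.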
Step 2 – the navigation bar

Adding a basic top navigation bar to our website with Foundation 4 is really easy. We just follow these steps:

Create an HTML nav element by inserting the following code:

<nav class="top-bar"></nav>

Add a title for the navigation bar (optional) by inserting the following code:

<nav class="top-bar">
  <ul class="title-area">
    <li class="name">
      <h1><a href="#">My First Foundation Web</a></h1>
    </li>
    <li class="toggle-topbar">
      <a href="#"><span>Menu</span></a>
    </li>
  </ul>
</nav>

Add some navigation elements inside the nav element:

<nav class="top-bar">
  <!-- Title area -->
  <ul class="title-area">
    <li class="name">
      <h1><a href="#">My First Foundation Web</a></h1>
    </li>
    <li class="toggle-topbar">
      <a href="#"><span>Menu</span></a>
    </li>
  </ul>
  <!-- Here starts the nav section -->
  <section class="top-bar-section">
    <!-- Left nav section -->
    <ul class="left">
      <li class="divider"></li>
      <li class="has-dropdown"><a href="#">Options</a>
        <!-- First submenu -->
        <ul class="dropdown">
          <li class="has-dropdown"><a href="#">Option 1a</a>
            <!-- Second submenu -->
            <ul class="dropdown">
              <li><label>2nd Options list</label></li>
              <li><a href="#">Option 2a</a></li>
              <li><a href="#">Option 2b</a></li>
              <li class="has-dropdown">
                <a href="#">Option 2c</a>
                <!-- Third submenu -->
                <ul class="dropdown">
                  <li><label>3rd Options list</label></li>
                  <li><a href="#">Option 3a</a></li>
                  <li><a href="#">Option 3b</a></li>
                  <li><a href="#">Option 3c</a></li>
                </ul>
              </li>
            </ul>
          </li>
          <!-- Visual separation between elements -->
          <li class="divider"></li>
          <li><a href="#">Option 2b</a></li>
          <li><a href="#">Option 2c</a></li>
        </ul>
      </li>
      <li class="divider"></li>
    </ul>
  </section>
</nav>

The interesting parts in the preceding code are as follows:

<li class="divider">: It creates a visual separation between the elements of a list.
<li class="has-dropdown">: It shows a drop-down element when clicked.
<ul class="dropdown">: It indicates that the list is a drop-down menu.
Apart from that, the left class is used to specify that we want the buttons on the left side of the bar. We would use the right class to put them on the right side of the screen, or both classes if we want several buttons in different places. Our navigation bar looks like the following screenshot:

Of course, this navigation bar shows responsive behavior, thanks to the class toggle-topbar-menu-icon. This class forces the buttons to collapse on narrower screens. And it looks like the following screenshot:

Now we know how to add a top navigation bar to our page. We just add the navigation bar's code before the grid, and the result is shown in the following screenshot:

Summary

In this article we learnt how to use two of the many UI elements provided to us by Foundation 4: the grid and the navigation bar. The screenshots provided help in giving a better idea of how the elements look when incorporated in our website.

Resources for Article:

Further resources on this subject:
HTML5: Generic Containers [Article]
Building HTML5 Pages from Scratch [Article]
Video conversion into the required HTML5 Video playback [Article]

Packt
15 Jan 2016
5 min read

Introducing Sails.js

In this article by Shahid Shaikh, author of the book Sails.js Essentials, you will learn a few basics of Sails.js. Sails.js is a modern, production-ready Node.js framework for developing Node.js web applications following the MVC pattern. If you are not looking to reinvent the wheel, as we do in the MongoDB, ExpressJS, AngularJS, and NodeJS (MEAN) stack, and want to focus on business logic, then Sails.js is the answer. Sails.js is a powerful, enterprise-ready framework; you can write your business logic and deploy the application with the surety that the application won't fail due to other factors.

Sails.js uses Express.js as a web framework and Socket.io to handle web socket messages. Both are integrated and coupled into Sails.js, therefore you don't need to install and configure them separately. Sails.js also supports all the popular databases, such as MySQL, MongoDB, PostgreSQL, and so on. It also comes with an autogenerated API feature that lets you create an API on the go.

(For more resources related to this topic, see here.)

Brief about MVC

We know that Model-View-Controller (MVC) is a software architecture pattern coined by Smalltalk engineers. MVC separates the application into three internal components, and data is passed via each component. Each component is responsible for its task and passes its result to the next component. This separation provides a great opportunity for code reusability and loose coupling.

MVC components

The following are the components of the MVC architecture:

Model

The main component of MVC is the model. The model represents knowledge. It could be a single object or a nested structure of objects. The model directly manages the data (it stores the data as well), logic, and rules of the application.

View

The view is the visual representation of the model. The view takes data from the model and presents it (in a browser or console, and so on). The view gets updated as soon as the model is changed. An ideal MVC application must have a system to notify other components about the changes.
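The notification requirement just described can be sketched with a minimal observer in plain JavaScript. This is an illustration of the pattern only, not Sails.js code; the Model type and its methods are made up for the example:

```javascript
// Minimal sketch of the model-notifies-view idea (illustrative, not Sails.js API).
function Model() {
  this.data = {};
  this.listeners = [];
}

// A view (or any component) registers interest in model changes.
Model.prototype.subscribe = function (listener) {
  this.listeners.push(listener);
};

// Updating the model notifies every subscribed listener.
Model.prototype.set = function (key, value) {
  this.data[key] = value;
  this.listeners.forEach(function (listener) {
    listener(key, value);
  });
};

var model = new Model();
var rendered = [];

// A "view" that re-renders whenever the model changes.
model.subscribe(function (key, value) {
  rendered.push(key + '=' + value);
});

model.set('title', 'Hello');
console.log(rendered); // → [ 'title=Hello' ]
```

In a real MVC framework the framework itself wires up this subscription for you; the sketch only shows why the notification system matters.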
In a web application, the view consists of the HTML Embedded JavaScript (EJS) pages that we present in a web browser.

Controller

As the name implies, the task of a controller is to accept input and convert it into a proper command for the model or view. The controller can send commands to the model to make changes or update the state, and it can also send commands to the view to update the information. For example, consider Google Docs as an MVC application: the view is the screen where you type. At defined intervals, it automatically saves the information to the Google server; a controller notifies the model (the Google backend server) to update the changes.

Installing Sails.js

Make sure that you have the latest versions of Node.js and npm installed in your system before installing Sails.js. You can install it using the following command:

npm install -g sails

You can then use Sails.js as a command-line tool, as shown in the following:

Creating a new project

You can create a new project using the Sails.js command-line tool. The command is as follows:

sails create appName

Once the project is created, you need to install the dependencies that it needs. You can do so by running the following command:

npm install

Adding database support

Sails.js provides object-relational mapping (ORM) drivers for widely-used databases such as MySQL, MongoDB, and PostgreSQL. In order to add these to your project, you need to install the respective packages, such as sails-mysql for MySQL, sails-mongo for MongoDB, and so on. Once one is added, you need to change the connections.js and models.js files located in the /config folder. Add the connection parameters such as host, user, password, and database name in the connections.js file.
For example, for MySQL, add the following:

module.exports.connections = {
  mysqlAdapter: {
    adapter: 'sails-mysql',
    host: 'localhost',
    user: 'root',
    password: '',
    database: 'sampleDB'
  }
};

In the models.js file, change the connection parameter to this one. Here is a snippet for that:

module.exports.models = {
  connection: 'mysqlAdapter'
};

Now, Sails.js will communicate with MySQL using this connection.

Adding a grunt task

Sails.js uses grunt as its default task builder and provides an effective way to add or remove tasks. If you take a look at the tasks folder in the Sails.js project directory, you will see that it contains the config and register folders, which hold the tasks and register them with a grunt runner. In order to add a new task, you can create a new file in the /config folder and add the grunt task using the following snippet as a default:

module.exports = function(grunt) {
  // Your grunt task code
};

Once done, you can register it with the default task runner, or create a new file in the /register folder and add the task using the following code:

module.exports = function (grunt) {
  grunt.registerTask('task-name', ['your custom task']);
};

Run this task using grunt <task-name>.

Summary

In this article, you learned that you can develop a rich web application very effectively with Sails.js, as you don't have to do a lot of extra work for configuration and setup. Sails.js also provides an autogenerated REST API and built-in WebSocket integration in each route, which will help you to develop real-time applications in an easy way.

Resources for Article:

Further resources on this subject:
Using Socket.IO and Express together [article]
Parallel Programming Patterns [article]
Planning Your Site in Adobe Muse [article]