How-To Tutorials - Programming

Chrome Custom Tabs

Packt
17 Feb 2016
16 min read
Well, most of us know tabs from every day Internet browsing. It doesn't really matter which browser you use; all browsers support tabs and multiple tabs' browsing. This allows us to have more than one website open at the same time and navigate between the opened instances. In Android, things are much the same, but when using WebView, you don't have tabs. This article will give highlights about WebView and the new feature of Android 6, Chrome custom tabs. (For more resources related to this topic, see here.) What is WebView? WebView is the part in the Android OS that's responsible for rendering web pages in most Android apps. If you see web content in an Android app, chances are you're looking at WebView. The major exceptions to this rule are some of the Android browsers, such as Chrome, Firefox, and so on. In Android 4.3 and lower, WebView uses code based on Apple's Webkit. In Android 4.4 and higher, WebView is based on the Chromium project, which is the open source base of Google Chrome. In Android 5.0, WebView was decoupled into a separate app that allowed timely updates through Google Play without requiring firmware updates to be issued, and the same technique was used with Google Play services. Now, let's talk again about a simple scenario: we want to display web content (URL-related) in our application. We have two options: either launch a browser or build our own in-app browser using WebView. Both options have trade-offs or disadvantages if we write them down. A browser is an external application and you can't really change its UI; while using it, you push the users to other apps and you may lose them in the wild. On the other hand, using WebView will keep the users tightly inside. However, actually dealing with all possible actions in WebView is quite an overhead. Google heard our rant and came to the rescue with Chrome custom tabs. Now we have better control over the web content in our application, and we can stitch web content into our app in a cleaner, prettier manner. Customization options Chrome custom tabs allow several modifications and tweaks: The toolbar color Enter and exit animations Custom actions for the toolbar and overflow menu Prestarted and prefetched content for faster loading When to use Chrome custom tabs Ever since WebView came out, applications have been using it in multiple ways, embedding content—local static content inside the APK and dynamic content as loading web pages that were not designed for mobile devices at the beginning. Later on we saw the rise of the mobile web era complete with hybrid applications). Chrome custom tabs are a bit more than just loading local content or mobile-compatible web content. They should be used when you load web data and want to allow simple implementation and easier code maintenance and, furthermore, make the web content part of your application—as if it's always there within your app. Among the reasons why you should use custom tabs are the following: Easy implementation: you use the support library when required or just add extras to your View intent. It's that simple. In app UI modifications, you can do the following: Set the toolbar color Add/change the action button Add custom menu items to the overflow menu Set and create custom in/out animations when entering the tab or exiting to the previous location Easier navigation and navigation logic: you can get a callback notifying you about an external navigation, if required. You know when the user navigates to web content and where they should return when done. 
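To make the "just add extras to your VIEW intent" point concrete, here is a minimal sketch (not taken from this article) that opens a URL in a custom tab using the support library's CustomTabsIntent, which is introduced below; the URL and toolbar color are placeholder values you would replace with your own:

    // Minimal illustrative sketch — the URL and color below are placeholders.
    import android.app.Activity;
    import android.graphics.Color;
    import android.net.Uri;
    import android.support.customtabs.CustomTabsIntent;

    public class MinimalCustomTabLaunch {
        public static void open(Activity activity) {
            CustomTabsIntent tabIntent = new CustomTabsIntent.Builder()
                    .setToolbarColor(Color.parseColor("#2196F3")) // match your app theme
                    .setShowTitle(true)
                    .build();
            // launchUrl() sends an ACTION_VIEW intent with the custom tab extras attached.
            tabIntent.launchUrl(activity, Uri.parse("https://example.com"));
        }
    }

If the device has no Custom Tabs-capable browser, the same intent simply opens in whatever browser handles ACTION_VIEW, which ignores the extra data.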
Chrome custom tabs allow added performance optimizations that you can use: You can keep the engine running, so to speak, and actually give the custom tab a head start to start itself and do some warm up prior to using it. This is done without interfering or taking away precious application resources. You can provide a URL to load in advance in the background while waiting for other user interactions. This speeds up the user-visible page loading time and gives the user a sense of blazing fast application where all the content is just a click away. While using the custom tab, the application won't be evicted as the application level will still be in the foreground even though the tab is on top of it. So, we remain at the top level for the entire usage time (unless a phone call or some other user interaction leads to a change). Using the same Chrome container means that users are already signed in to sites they connected to in the past; specific permissions that were granted previously apply here as well; even fill data, autocomplete, and sync work here. Chrome custom tabs allow us give the users the latest browser implementation on pre-Lollipop devices where WebView is not the latest version. The implementation guide As discussed earlier, we have a couple of features integrated into Chrome custom tabs. The first customizes the UI and interaction with the custom tabs. The second allows pages to be loaded faster and keeps the application alive. Can we use Chrome custom tabs? Before we start using custom tabs, we want to make sure they're supported. Chrome custom tabs expose a service, so the best check for support is to try and bind to the service. Success means that custom tabs are supported and can be used. You can check out this gist, which shows a helper how to to check it, or check the project source code later on at https://gist.github.com/MaTriXy/5775cb0ff98216b2a99d. After checking and learning that support exists, we will start with the UI and interaction part. Custom UI and tab interaction Here, we will use the well-known ACTION_VIEW intent action, and by appending extras to the intent sent to Chrome, we will trigger changes in the UI. Remember that the ACTION_VIEW intent is compatible with all browsers, including Chrome. There are some phones without Chrome out there, or there are instances where the device's default browser isn't Chrome. In these cases, the user will navigate to the specific browser application. Intent is a convenient way to pass that extra data we want Chrome to get. Don't use any of these flags when calling to the Chrome custom tabs: FLAG_ACTIVITY_NEW_TASK FLAG_ACTIVITY_NEW_DOCUMENT Before using the API, we need to add it to our gradle file: compile 'com.android.support:customtabs:23.1.0' This will allow us to use the custom tab support library in our application: CustomTabsIntent.EXTRA_SESSION The preceding code is an extra from the custom tabs support library; it's used to match the session. It must be included in the intent when opening a custom tab. It can be null if there is no need to match any service-side sessions with the intent. We have a sample project to show the options for the UI called ChubbyTabby at https://github.com/MaTriXy/ChubbyTabby. We will go over the important parts here as well. 
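Before reaching for the builder, you may also want a quick runtime check that a Custom Tabs-capable browser is actually installed. The helper in the gist above binds to the service; the following is only a rough, simpler sketch (not the gist's code) that asks PackageManager whether any installed package exposes the custom tabs service action:

    // Rough sketch: true if at least one installed browser advertises the
    // Chrome custom tabs service.
    import android.content.Context;
    import android.content.Intent;
    import android.content.pm.PackageManager;

    public class CustomTabsSupportCheck {
        private static final String ACTION_CUSTOM_TABS_CONNECTION =
                "android.support.customtabs.action.CustomTabsService";

        public static boolean isSupported(Context context) {
            PackageManager pm = context.getPackageManager();
            Intent serviceIntent = new Intent(ACTION_CUSTOM_TABS_CONNECTION);
            // queryIntentServices() returns every package able to handle the service intent.
            return !pm.queryIntentServices(serviceIntent, 0).isEmpty();
        }
    }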
Our main interaction comes from a special builder from the support library called CustomTabsIntent.Builder; this class will help us build the intent we need for the custom tab: CustomTabsIntent.Builder intentBuilder = new CustomTabsIntent.Builder(); //init our Builder //Setting Toolbar Color int color = getResources().getColor(R.color.primary); //we use primary color for our toolbar as well - you can define any color you want and use it. intentBuilder.setToolbarColor(color); //Enabling Title showing intentBuilder.setShowTitle(true); //this will show the title in the custom tab along the url showing at the bottom part of the tab toolbar. //This part is adding custom actions to the over flow menu String menuItemTitle = getString(R.string.menu_title_share); PendingIntent menuItemPendingIntent = createPendingShareIntent(); intentBuilder.addMenuItem(menuItemTitle, menuItemPendingIntent); String menuItemEmailTitle = getString(R.string.menu_title_email); PendingIntent menuItemPendingIntentTwo = createPendingEmailIntent(); intentBuilder.addMenuItem(menuItemEmailTitle, menuItemPendingIntentTwo); //Setting custom Close Icon. intentBuilder.setCloseButtonIcon(mCloseButtonBitmap); //Adding custom icon with custom action for the share action. intentBuilder.setActionButton(mActionButtonBitmap, getString(R.string.menu_title_share), createPendingShareIntent()); //Setting start and exit animation for the custom tab. intentBuilder.setStartAnimations(this, R.anim.slide_in_right, R.anim.slide_out_left); intentBuilder.setExitAnimations(this, android.R.anim.slide_in_left, android.R.anim.slide_out_right); CustomTabActivityHelper.openCustomTab(this, intentBuilder.build(), Uri.parse(URL), new WebviewFallback(), useCustom);  A few things to notice here are as follows: Every menu item uses a pending intent; if you don't know what a pending intent is, head to http://developer.android.com/reference/android/app/PendingIntent.html When we set custom icons, such as close buttons or an action button, for that matter, we use bitmaps and we must decode the bitmap prior to passing it to the builder Setting animations is easy and you can use animations' XML files that you created previously; just make sure that you test the result before releasing the app The following screenshot is an example of a Chrome custom UI and tab: The custom action button As developers, we have full control over the action buttons presented in our custom tab. For most use cases, we can think of a share action or maybe a more common option that your users will perform. The action button is basically a bundle with an icon of the action button and a pending intent that will be called by Chrome when your user hits the action button. The icon should be 24 dp in height and 24-48 dp in width according to specifications: //Adding custom icon with custom action for the share action intentBuilder.setActionButton(mActionButtonBitmap, getString(R.string.menu_title_share), createPendingShareIntent()); Configuring a custom menu By default, Chrome custom tabs usually have a three-icon row with Forward, Page Info, and Refresh on top at all times and Find in page and Open in Browser (Open in Chrome can appear as well) at the footer of the menu. 
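The menu items and the action button above all hand Chrome a pending intent to fire on our behalf. The createPendingShareIntent() helper itself is not listed in this excerpt, so the following is an assumed, typical implementation rather than code from the article or the ChubbyTabby sample; it presumes it lives in the same activity as the builder code, with android.app.PendingIntent and android.content.Intent imported, and reuses the same URL constant passed to openCustomTab():

    // Assumed helper — the article references createPendingShareIntent() without showing it.
    private PendingIntent createPendingShareIntent() {
        Intent shareIntent = new Intent(Intent.ACTION_SEND);
        shareIntent.setType("text/plain");
        shareIntent.putExtra(Intent.EXTRA_TEXT, URL);
        // Chrome fires this PendingIntent when the user taps the menu item or action button.
        return PendingIntent.getActivity(this, 0, shareIntent,
                PendingIntent.FLAG_UPDATE_CURRENT);
    }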
We, developers, have the ability to add and customize up to three menu items that will appear between the icon row and foot items as shown in the following screenshot: The menu we see is actually represented by an array of bundles, each with menu text and a pending intent that Chrome will call on your behalf when the user taps the item: //This part is adding custom buttons to the over flow menu String menuItemTitle = getString(R.string.menu_title_share); PendingIntent menuItemPendingIntent = createPendingShareIntent(); intentBuilder.addMenuItem(menuItemTitle, menuItemPendingIntent); String menuItemEmailTitle = getString(R.string.menu_title_email); PendingIntent menuItemPendingIntentTwo = createPendingEmailIntent(); intentBuilder.addMenuItem(menuItemEmailTitle, menuItemPendingIntentTwo); Configuring custom enter and exit animations Nothing is complete without a few animations to tag along. This is no different, as we have two transitions to make: one for the custom tab to enter and another for its exit; we have the option to set a specific animation for each start and exit animation: //Setting start and exit animation for the custom tab. intentBuilder.setStartAnimations(this,R.anim.slide_in_right, R.anim.slide_out_left); intentBuilder.setExitAnimations(this, android.R.anim.slide_in_left, android.R.anim.slide_out_right); Chrome warm-up Normally, after we finish setting up the intent with the intent builder, we should call CustomTabsIntent.launchUrl (Activity context, Uri url), which is a nonstatic method that will trigger a new custom tab activity to load the URL and show it in the custom tab. This can take up quite some time and impact the impression of smoothness the app provides. We all know that users demand a near-instantaneous experience, so Chrome has a service that we can connect to and ask it to warm up the browser and its native components. Calling this will ask Chrome to perform the following: The DNS preresolution of the URL's main domain The DNS preresolution of the most likely subresources Preconnection to the destination, including HTTPS/TLS negotiation The process to warm up Chrome is as follows: Connect to the service. Attach a navigation callback to get notified upon finishing the page load. On the service, call warmup to start Chrome behind the scenes. Create newSession; this session is used for all requests to the API. Tell Chrome which pages the user is likely to load with mayLaunchUrl. Launch the intent with the session ID generated in step 4. Connecting to the Chrome service Connecting to the Chrome service involves dealing with Android Interface Definition Language (AIDL). If you don't know about AIDL, read http://developer.android.com/guide/components/aidl.html. The interface is created with AIDL, and it automatically creates a proxy service class for you: CustomTabsClient.bindCustomTabsService() So, we check for the Chrome package name; in our sample project, we have a special method to check whether Chrome is present in all variations. 
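Put together, the six steps above look roughly like the following sketch. This is a condensed illustration, not the sample project's code: it assumes pkgName has already been resolved to an installed Chrome package as discussed next, that activity is a final reference to the current Activity, and that MY_URL is a placeholder constant.

    // Condensed sketch of the warm-up flow; error handling omitted.
    CustomTabsClient.bindCustomTabsService(activity, pkgName,
            new CustomTabsServiceConnection() {
                @Override
                public void onCustomTabsServiceConnected(ComponentName name,
                                                         CustomTabsClient client) {
                    client.warmup(0);                                        // step 3
                    CustomTabsSession session =
                            client.newSession(new CustomTabsCallback());     // steps 2 and 4
                    if (session != null) {
                        session.mayLaunchUrl(Uri.parse(MY_URL), null, null); // step 5
                        CustomTabsIntent tabIntent =
                                new CustomTabsIntent.Builder(session).build();
                        tabIntent.launchUrl(activity, Uri.parse(MY_URL));    // step 6
                    }
                }

                @Override
                public void onServiceDisconnected(ComponentName name) {
                    // The client and any sessions created from it are no longer valid.
                }
            });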
After we set the package, we bind to the service and get a CustomTabsClient object that we can use until we're disconnected from the service. The package name (pkgName) is one of several options, depending on which variant of Chrome is installed:

    static final String STABLE_PACKAGE = "com.android.chrome";
    static final String BETA_PACKAGE = "com.chrome.beta";
    static final String DEV_PACKAGE = "com.chrome.dev";
    static final String LOCAL_PACKAGE = "com.google.android.apps.chrome";

    private CustomTabsClient mClient;

    // Binds to the service.
    CustomTabsClient.bindCustomTabsService(myContext, pkgName,
            new CustomTabsServiceConnection() {
                @Override
                public void onCustomTabsServiceConnected(ComponentName name,
                                                         CustomTabsClient client) {
                    // CustomTabsClient should now be valid to use
                    mClient = client;
                }

                @Override
                public void onServiceDisconnected(ComponentName name) {
                    // CustomTabsClient is no longer valid, which also invalidates sessions.
                    mClient = null;
                }
            });

After we bind to the service, we can call the proper methods we need.

Warming up the browser process

The method for this is as follows:

    boolean CustomTabsClient.warmup(long flags)

    //With our valid client earlier we call the warmup method.
    mClient.warmup(0);

Flags are currently not being used, so we pass 0 for now. The warm-up procedure loads native libraries and the browser process required to support custom tab browsing later on. This is asynchronous, and the return value indicates whether the request has been accepted or not. It returns true to indicate success.

Creating a new tab session

The method for this is as follows:

    boolean CustomTabsClient.newSession(ICustomTabsCallback callback)

The new tab session is used as the grouping object tying together the mayLaunchUrl call, the VIEW intent that we build, and the tab that is generated. We can get a callback associated with the created session that would be passed for any consecutive mayLaunchUrl calls. This method returns a CustomTabsSession when a session is created successfully; otherwise, it returns null.

Setting the prefetching URL

The method for this is as follows:

    boolean CustomTabsSession.mayLaunchUrl (Uri url, Bundle extras, List<Bundle> otherLikelyBundles)

This method will notify the browser that a navigation to this URL will happen soon. Make sure that you call warmup() prior to calling this method—this is a must. The most likely URL has to be specified first, and you can send an optional list of other likely URLs (otherLikelyBundles). Lists have to be sorted in descending order of priority, and the optional list may be ignored. A new call to this method will lower the priority of previous calls and can result in URLs not being prefetched. The Boolean return value informs us whether the operation has been completed successfully.

Custom tabs connection callback

The method for this is as follows:

    void CustomTabsCallback.onNavigationEvent (int navigationEvent, Bundle extras)

We have a callback triggered upon each navigation event in the custom tab. The int navigationEvent element is one of six values that define the state the page is in. Refer to the following code for more information:

    //Sent when the tab has started loading a page.
    public static final int NAVIGATION_STARTED = 1;
    //Sent when the tab has finished loading a page.
    public static final int NAVIGATION_FINISHED = 2;
    //Sent when the tab couldn't finish loading due to a failure.
    public static final int NAVIGATION_FAILED = 3;
    //Sent when loading was aborted by a user action.
    public static final int NAVIGATION_ABORTED = 4;
    //Sent when the tab becomes visible.
    public static final int TAB_SHOWN = 5;
    //Sent when the tab becomes hidden.
    public static final int TAB_HIDDEN = 6;

    private static class NavigationCallback extends CustomTabsCallback {
        @Override
        public void onNavigationEvent(int navigationEvent, Bundle extras) {
            Log.i(TAG, "onNavigationEvent: Code = " + navigationEvent);
        }
    }

Summary

In this article, we learned about a newly added feature, Chrome custom tabs, which allows us to embed web content into our application and modify the UI. Chrome custom tabs allow us to provide a fuller, faster in-app web experience for our users. We use the Chrome engine under the hood, which allows faster loading than regular WebViews or loading the entire Chrome (or another browser) application. We saw that we can preload pages in the background, making it appear as if our data loads blazingly fast. We can customize the look and feel of our Chrome tab so that it matches our app. Among the changes we saw were the toolbar color, transition animations, and even the addition of custom actions to the toolbar. Custom tabs also benefit from Chrome features such as saved passwords, autofill, tap to search, and sync; these are all available within a custom tab. For developers, integration is quite easy and requires only a few extra lines of code at the basic level. The support library helps with more complex integration, if required. This is a Chrome feature, which means you get it on any Android device where a recent version of Chrome is installed. Remember that the Chrome custom tab support library changes with new features and fixes, the same as other support libraries, so please update your version and make sure that you use the latest API to avoid any issues. To learn more about Chrome custom tabs and Android 6, refer to the following books: Android 6 Essentials (https://www.packtpub.com/application-development/android-6-essentials) Augmented Reality for Android Application Development (https://www.packtpub.com/application-development/augmented-reality-android-application-development) Resources for Article: Further resources on this subject: Android and iOS Apps Testing at a Glance [Article] Working with Xamarin.Android [Article] Mobile Phone Forensics – A First Step into Android Forensics [Article]

Building Android (Must know)

Packt
13 Sep 2013
14 min read
(For more resources related to this topic, see here.) Getting ready You need Ubuntu 10.04 LTS or later (Mac OS X is also supported by the build system, but we will be using Ubuntu for this article). This is the supported build operating system, and the one for which you will get the most help from the online community. In my examples, I use Ubuntu 11.04, which is also reasonably well supported. You need approximately 6 GB of free space for the Android code files. For a complete build, you need 25 GB of free space. If you are using Linux in a virtual machine, make sure the RAM or the swap size is at least 16 GB, and you have 30 GB of disk space to complete the build. As of Android Versions 2.3 (Gingerbread) and later, building the system is only possible on 64-bit computers. Using 32-bit machines is still possible if you work with Froyo (Android 2.2). However, you can still build later versions on a 32-bit computer using a few "hacks" on the build scripts that I will describe later. The following steps outline the process needed to set up a build environment and compile the Android framework and kernel: Setting up a build environment Downloading the Android framework sources Building the Android framework Building a custom kernel In general, your (Ubuntu Linux) build computer needs the following: Git 1.7 or newer (GIT is a source code management tool), JDK 6 to build Gingerbread and later versions, or JDK 5 to build Froyo and older versions Python 2.5 – 2.7 GNU Make 3.81 – 3.82 How to do it... We will first set up the build environment with the help of the following steps: All of the following steps are targeted towards 64-bit Ubuntu. Install the required JDK by executing the following command: JDK6sudo add-apt-repository "deb http: //archive.canonical.com/ lucid partner" sudo apt-get update sudo apt-get install sun-java6-jdkJDK5sudo add-apt-repository "deb http: //archive.ubuntu.com/ubuntu hardy main multiverse" sudo add-apt-repository "deb http: //archive.ubuntu.com/ubuntu hardy-updates main multiverse" sudo apt-get update sudo apt-get install sun-java5-jdk Install the required library dependencies: sudo apt-get install git-core gnupg flex bison gperf build-essential zip curl zlib1g-dev libc6-dev lib32ncurses5-dev ia32-libs x11proto-core-dev libx11-dev lib32readline5-dev lib32z-dev libgl1-mesa-dev g++-multilib mingw32 tofrodos python-markdown libxml2-utils xsltproc [OPTIONAL]. On Ubuntu 10.10, a symlink is not created between libGL.so.1 and libGL.so, which sometimes causes the build process to fail: sudo ln -s /usr/lib32/mesa/libGL.so.1 /usr/lib32/mesa/libGL.so [OPTIONAL] On Ubuntu 11.10, an extra dependency is sudo apt-get install libx11-dev:i386 Now, we will download the Android sources from Google's repository. Install repo. Make sure you have a /bin directory and that it exists in your PATH variable: mkdir ~/bin PATH=~/bin:$PATH curl https: //dl-ssl.google.com/dl/googlesource/git-repo/repo > ~/bin/repo chmod a+x ~/bin/repo Repo is a python script used to download the Android sources, among other tasks. It is designed to work on top of GIT. Initialize repo. In this step, you need to decide the branch of the Android source you wish to download. If you wish to make use of Gerrit, which is the source code reviewing tool used, make sure you have a live Google mail address. You will be prompted to use this e-mail address when repo initializes. Create a working directory on your local machine. 
We will call this android_src:

    mkdir android_src
    cd android_src

The following command will initialize repo to download the "master" branch:

    repo init -u https://android.googlesource.com/platform/manifest

The following command will initialize repo to download the Gingerbread 2.3.4 branch:

    repo init -u https://android.googlesource.com/platform/manifest -b android-2.3.4_r1

The -b switch is used to specify the branch you wish to download. Once repo is configured, we are ready to obtain the source files. The format of the command is as follows:

    repo sync -jX

-jX is optional, and is used for parallel fetch. The following command will sync all the necessary source files for the Android framework. Note that these steps only download the Android framework files; the kernel download is a separate process.

    repo sync -j16

The source code access is anonymous, that is, you do not need to be registered with Google to be able to download the source code. The servers allocate a fixed quota to each IP address that accesses the source code. This is to protect the servers against excessive download traffic. If you happen to be behind a NAT and share an IP address with others who also wish to download the code, you may encounter error messages from the source code servers warning about excessive usage. In this case, you can solve the problem with authenticated access. In this method, you get a separate quota based on your user ID, generated by the password generator system. The password generator and associated instructions are available at https://android.googlesource.com/new-password. Once you have obtained a user ID/password and set up your system appropriately, you can force authentication by using the following command:

    repo init -u https://android.googlesource.com/a/platform/manifest

Notice the /a/ in the URI. This indicates authenticated access.

Proxy issues

If you are downloading from behind a proxy, set the following environment variables:

    export HTTP_PROXY=http://<proxy_user_id>:<proxy_password>@<proxy_server>:<proxy_port>
    export HTTPS_PROXY=http://<proxy_user_id>:<proxy_password>@<proxy_server>:<proxy_port>

Next, we describe the steps needed to build the Android framework sources: Initialize the terminal environment. Certain build-time tools need to be included in your current terminal environment. So, navigate to your source directory and source the environment script:

    cd android_src/
    source build/envsetup.sh

The sources can be built for various targets. Each target descriptor has the BUILD-BUILDTYPE format:

- BUILD: Refers to a specific combination of the source code for a certain device. For example, full_maguro targets Galaxy Nexus, while generic targets the emulator.
- BUILDTYPE: This can be one of the following three values:
  - user: Suitable for production builds
  - userdebug: Similar to user, with root access in ADB for easier debugging
  - eng: Development build only

We will be building for the emulator in our current example. Issue the following command to do so:

    lunch full-eng

To actually build the code, we will use make. The format is as follows:

    make -jX

Here, X indicates the number of parallel builds. The usual rule is: X is the number of CPU cores + 2. This is an experimental formula, and the reader should feel free to test it with different values. To build the code:

    make -j6

Now, we must wait till the build is complete. Depending on your system's specifications, this can take anywhere between 20 minutes and 1 hour.
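If you do not know the machine's core count offhand, the "cores + 2" rule can be applied directly on the command line; this convenience is not part of the original steps:

    # Derive the parallel job count from the CPU core count ("cores + 2" rule of thumb).
    CORES=$(grep -c ^processor /proc/cpuinfo)
    make -j$((CORES + 2))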
At the end of a successful build, the output looks similar to the following (note that this may vary depending on your target):

    ...
    target Dex: SystemUI
    Copying: out/target/common/obj/APPS/SystemUI_intermediates/noproguard.classes.dex
    target Package: SystemUI (out/target/product/generic/obj/APPS/SystemUI_intermediates/package.apk)
    'out/target/common/obj/APPS/SystemUI_intermediates//classes.dex' as 'classes.dex'...
    Install: out/target/product/generic/system/app/SystemUI.apk
    Finding NOTICE files: out/target/product/generic/obj/NOTICE_FILES/hash-timestamp
    Combining NOTICE files: out/target/product/generic/obj/NOTICE.html
    Target system fs image: out/target/product/generic/obj/PACKAGING/systemimage_intermediates/system.img
    Install system fs image: out/target/product/generic/system.img
    Installed file list: out/target/product/generic/installed-files.txt
    DroidDoc took 440 sec. to write docs to out/target/common/docs/doc-comment-check

A better check for a successful build is to examine the newly created files. The build produces a few main files inside android_src/out/target/product/<DEVICE>/, which are as follows:

- system.img: The system image file
- boot.img: Contains the kernel
- recovery.img: Contains code for the recovery partition of the device

In the case of an emulator build, the preceding files will appear at android_src/out/target/product/generic/. Now, we can test our build simply by issuing the emulator command:

    emulator

This launches an Android emulator, as shown in the following screenshot, running the code we've just built. The code we've downloaded contains prebuilt Linux kernels for each supported target. If you only wish to change the framework files, you can use the prebuilt kernels, which are automatically included in the build images. If you are making specific changes to the kernel, you will have to obtain a specific kernel and build it separately, which is explained later.

Faster Builds – CCACHE

The framework code contains C language and Java code. The majority of the C language code exists as shared objects that are built during the build process. If you issue the make clean command, which deletes all the built code (simply deleting the build output directory has the same effect), and then rebuild, it will take a significant amount of time. If no changes were made to these shared libraries, the build time can be sped up with CCACHE, which is a compiler cache. In the root of the source directory android_src/, use the following commands:

    export USE_CCACHE=1
    export CCACHE_DIR=<PATH_TO_YOUR_CCACHE_DIR>

To set a cache size:

    prebuilt/linux-x86/ccache/ccache -M 50G

This reserves a cache size of 50 GB. To watch how the cache is used during the build process, use the following command (navigate to your source directory in another terminal):

    watch -n1 -d prebuilt/linux-x86/ccache/ccache -s

In this part, we will obtain the sources and build the goldfish emulator kernel. Building kernels for devices is done in a similar way. goldfish is the name of the kernel modified for the Android QEMU-based emulator. Get the kernel sources by creating a subdirectory of android_src:

    mkdir kernel_code
    cd kernel_code
    git clone https://android.googlesource.com/kernel/goldfish.git
    git branch -r

This will clone goldfish.git into a folder named goldfish (created automatically) and then list the remote branches available.
The output should look like the following (this is seen after the execution of git branch):

    origin/HEAD -> origin/master
    origin/android-goldfish-2.6.29
    origin/linux-goldfish-3.0-wip
    origin/master

Here, we notice origin/android-goldfish-2.6.29, which is the kernel we wish to obtain:

    cd goldfish
    git checkout --track -b android-goldfish-2.6.29 origin/android-goldfish-2.6.29

This will obtain the kernel code. Next, set up the build environment. We need to initialize the terminal environment by updating the system PATH variable to point to a cross compiler which will be used to compile the Linux kernel. This cross compiler is already available as a prebuilt binary distributed with the Android framework sources:

    export PATH=<PATH_TO_YOUR_ANDROID_SRC_DIR>/prebuilt/linux-x86/toolchain/arm-eabi-4.4.3/bin:$PATH

Run an emulator (you may choose to run the emulator with the system image that we just built earlier). We need this to obtain the kernel configuration file. Instead of manually configuring it, we choose to pull the config file of a running kernel. Make sure ADB is still in your path. It will be in your PATH variable if you haven't closed the terminal window since building the framework code; otherwise, execute the following steps sequentially. (Note that you have to change directory to ANDROID_SRC to execute the following commands.)

    source build/envsetup.sh
    lunch full-eng
    adb pull /proc/config.gz
    gunzip config.gz
    cp config .config

The preceding commands copy the config file of the running kernel into our kernel build tree. Start the compilation process:

    export ARCH=arm
    export SUBARCH=arm
    make

If the following comes up:

    Misc devices (MISC_DEVICES) [Y/n/?] y
    Android pmem allocator (ANDROID_PMEM) [Y/n] y
    Enclosure Services (ENCLOSURE_SERVICES) [N/y/?] n
    Kernel Debugger Core (KERNEL_DEBUGGER_CORE) [N/y/?] n
    UID based statistics tracking exported to /proc/uid_stat (UID_STAT) [N/y] n
    Virtual Device for QEMU tracing (QEMU_TRACE) [Y/n/?] y
    Virtual Device for QEMU pipes (QEMU_PIPE) [N/y/?] (NEW)

Enter y as the answer. This is some additional Android-specific configuration needed for the build. Now we have to wait till the build is complete. The final lines of the build output should look like the following (note that this can change depending on your target):

    ...
    LD      vmlinux
    SYSMAP  System.map
    SYSMAP  .tmp_System.map
    OBJCOPY arch/arm/boot/Image
    Kernel: arch/arm/boot/Image is ready
    AS      arch/arm/boot/compressed/head.o
    GZIP    arch/arm/boot/compressed/piggy.gz
    AS      arch/arm/boot/compressed/piggy.o
    CC      arch/arm/boot/compressed/misc.o
    LD      arch/arm/boot/compressed/vmlinux
    OBJCOPY arch/arm/boot/zImage
    Kernel: arch/arm/boot/zImage is ready

As the last line states, the new zImage is available inside arch/arm/boot/. To test it, we boot the emulator with this newly built image. Copy zImage to an appropriate directory. I just copied it to android_src/:

    emulator -kernel zImage

To verify that the emulator is indeed running our kernel, use the following command:

    adb shell
    # cat /proc/version

The output will look like:

    Linux version 2.6.29-gef9c64a (earlence@earlence-Satellite-L650) (gcc version 4.4.3 (GCC) ) #1 Mon Jun 4 16:35:00 CEST 2012

This is our custom kernel, since we observe the custom build string (earlence@earlence-Satellite-L650) as well as the time of the compilation. The build string will be the name of your computer.
Once the emulator has booted up, you will see a window similar to the following: Following are the steps required to build the framework on a 32-bit system: Make the following simple changes to build Gingerbread on 32-bit Ubuntu. Note that these steps assume that you have set up the system for a Froyo build. Assuming a Froyo build computer setup, the following steps guide you on incrementally making changes such that Gingerbread and later builds are possible. To set up for Froyo, please follow the steps explained at http://source.android.com/source/initializing.html. In build/core/main.mk, change ifneq (64,$(findstring 64,$(build_arch))) to ifneq (i686,$(findstring i686,$(build_arch))). Note that there are two changes on that line. In the following files: external/clearsilver/cgi/Android.mk external/clearsilver/java-jni/Android.mk external/clearsilver/util/Android.mk external/clearsilver/cs/Android.mk change:LOCAL_CFLAGS += -m64 LOCAL_LDFLAGS += -m64 to:LOCAL_CFLAGS += -m32 LOCAL_LDFLAGS += -m32 Install the following packages (in addition to the packages you must have installed for the Froyo build): sudo apt-get install lib64z1-dev libc6-dev-amd64 g++-multilib lib64stdc++6 Install Java 1.6 using the following command: sudo apt-get install sun-java6-jdk Summary The Android build system is a combination of several standard tools and custom wrappers. Repo is one such wrapper script that takes care of GIT operations and makes it easier for us to work with the Android sources. The kernel trees are maintained separately from the framework source trees. Hence, if you need to make customizations to a particular kernel, you will have to download and build it separately. The keen reader may be wondering how we are able to run the emulator if we never built a kernel in when we just compiled the framework. Android framework sources include prebuilt binaries for certain targets. These binaries are located in the /prebuilt directory under the framework source root directory. The kernel build process is more or less the same as building kernels for desktop systems. There are only a few Android-specific compilation switches, which we have shown to be easily configurable given an existing configuration file for the intended target. The sources consist of C/C++ and Java code. The framework does not include the kernel sources, as these are maintained in a separate GIT tree. In the next recipe, we will explain the framework code organization. It is important to understand how and where to make changes while developing custom builds. Resources for Article: Further resources on this subject: Android Native Application API [Article] Animating Properties and Tweening Pages in Android 3-0 [Article] So, what is Spring for Android? [Article]  

Debugging Multithreaded Applications as Singlethreaded in C#

Packt
15 Oct 2009
6 min read
We can identify threads created using both the BackgroundWorker component and the Thread class. We can also identify the main application thread and we learned about the information shown by the Threads window. However, we must debug the encryption process to solve its problem without taking into account the other concurrent threads. How can we successfully debug the encryption engine focusing on one thread and leaving the others untouched? We can use the Threads window to control the execution of the concurrent thread at runtime without having to make changes to the code. This will affect the performance results, but it will allow us to focus on a specific part of the code as if we were working in a single-threaded application. This technique is suitable for solving problems related to a specific part of the code that runs in a thread. However, when there are problems generated by concurrency we must use other debugging tricks that we will be learning shortly. The Threads window does a great job in offering good runtime information about the running threads while offering a simple way to watch, pause, and resume multiple threads. Time for action – Leaving a thread running alone You must run the encryption procedure called by ThreadEncryptProcedure. But you want to focus on just one thread, in order to solve the problem that the FBI agents detected. Changing the code is not an option, because it will take more time than expected, and you might introduce new bugs to the encryption engine. Thus, let's freeze the threads we are not interested in! Now, we are going to leave one encryption thread running alone to focus on its code without the other threads disturbing our debugging procedure: Stay in the project, SMSEncryption. Clear all the breakpoints. Press Ctrl + Shift + F9 or select Debug | Delete AllBreakpoints in the main menu. Make sure the Threads window is visible. Define a breakpoint in the line int liThreadNumber = (int)poThreadParameter; in the ThreadEncryptProcedure procedure code. Enter or copy and paste a long text, using the same lines (with more than 30,000 lines) in the Textbox labeled Original SMS Messages, as shown in the following image: Click on the Run in a thread button. The line with the breakpoint defined in the ThreadEncryptProcedure procedure is shown highlighted as the next statement that will be executed. The current thread will be shown with a yellow arrow on the left in the Threads window. Right-click on each of the other encryption threads and select Freeze in the context menu that appears, in order to suspend them. If the current thread is Encryption #1 and there are four cores available, you will freeze the following threads—Encryption #0, Encryption #2, and Encryption #3. Right-click on the Main thread and select Freeze in the context menu that appears, in order to suspend it (we do not want the BackgroundWorker to start and interfere with our work). The only working thread that matters will be Encryption #1, as shown in the following image: Run the code step-by-step inspecting values as you do with single-threaded applications. What just happened? It is easy to debug a multi hreaded application focusing on one thread instead of trying to do it with all the threads running at the same time. We could transform a complex multi threaded application into a single-threaded application without making changes to the code. We did it at runtime using the multithreading debugging features offered by the C# IDE. 
We suspended the execution of the concurrent threads that would disturb our step-by-step execution. Thus, we could focus on the code being executed by just one encryption thread.

Freezing and thawing threads

Freezing a thread suspends its execution. However, in the debugging process, we would need to resume the thread execution. This can be done at any point in time by right-clicking on a suspended thread and selecting Thaw in the context menu that appears, as shown in the following image: By freezing and thawing threads (suspending and resuming them), we can have exhaustive control over the threads running during the debugging process. It helps a lot when we have to solve bugs related to concurrency, as we can easily analyze many contexts without making changes to the code—which could generate new bugs. Nevertheless, when developing multithreaded applications, we must always test the execution with many concurrent threads running to make sure it does not have concurrency bugs. The debugging techniques allow us to isolate the code for evaluation purposes, but the final tests must use the full multithreading potential.

Viewing the call stack for each running thread

Each thread has its own independent stack. Using the Call Stack window, we can move through the methods that were called, as we are used to doing in single-threaded applications. The main difference with multithreaded applications is that when the active thread changes, the Call Stack window will also show different content. Debugging a multithreaded application using the techniques we are learning is an excellent way to understand how the different threads run and will improve our parallel programming skills. To show the call stack for the active thread, press Ctrl + Alt + C or go to Debug | Windows | Call Stack in the main menu. Make sure the Threads window is also visible to take into account the active thread when analyzing the call stack, as shown in the following image:

Have a go hero – Debugging and enhancing the encryption algorithm

Using the multithreaded debugging techniques we have learned so far, develop a new version of this application with the encryption problem solved. Take into account everything we have studied about freezing and thawing threads. Check the randomly generated garbage and the way it is applied to the generated encrypted string. By making some changes to it, you can have a robust encryption process that produces a different output each time for the same input text. You can improve the new versions by using new randomly generated garbage to enhance the encryption algorithms. Oh no! You have to explain to the agents the changes you made to the encryption procedure, and how it works.
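The "randomly generated garbage" mentioned in the challenge is essentially a per-message salt. As an illustrative sketch only, and not the SMSEncryption project's code, generating fresh random bytes for every message is what makes two encryptions of the same input text differ:

    // Illustrative sketch — not the book's encryption engine. A fresh, cryptographically
    // strong salt per message differentiates outputs even for identical input text.
    using System.Security.Cryptography;

    public static class SaltHelper
    {
        public static byte[] NewSalt(int length)
        {
            byte[] salt = new byte[length];
            RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
            rng.GetBytes(salt); // fill the buffer with random bytes
            return salt;
        }
    }

The salt would then be mixed into each encrypted SMS and stored alongside it so that decryption can reverse the process.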

Upgrading with Microsoft Sure Step

Packt
24 Jan 2011
11 min read
  Microsoft Dynamics Sure Step 2010 The smart guide to the successful delivery of Microsoft Dynamics Business Solutions Learn how to effectively use Microsoft Dynamics Sure Step to implement the right Dynamics business solution with quality, on-time and on-budget results. Understand the review and optimization offerings available from Microsoft Dynamics Sure Step to further enhance your business solution delivery during and after go-live. Gain knowledge of the project and change management content provided in Microsoft Dynamics Sure Step. Familiarize yourself with the approach to adopting the Microsoft Dynamics Sure Step methodology as your own.        Upgrade assessment and the diagnostic phase In this section, we will discuss the process, particularly the Upgrade Assessment Decision Accelerator offering, in more detail. We begin by reintroducing the diagram showing the flow of activities and Decision Accelerator offerings for an existing customer. You may recall that the flow is very similar to the one for a prospect, with the only difference being the Upgrade Assessment DA offering replacing the Requirements and Process Review DA. (Move the mouse over the image to enlarge.) As noted before, the flow for the existing customer also begins with Diagnostic Preparation, similar to that for a prospect. The guidance in the activity page can be leveraged to explain/understand the capabilities and features of the new version of the corresponding Microsoft Dynamics solution that is being considered. When interest is established in moving the existing solution to the current version of the solution, the next step is the Upgrade Assessment DA offering, which is the key step in this process. The Upgrade Assessment Decision Accelerator offering The Upgrade Assessment DA is the most important step in the process for an existing Microsoft Dynamics customer. The Upgrade Assessment DA is executed by the services delivery team to get an understanding of the existing solution being used by the customer, determine the components that need to be upgraded to the current release of the product, and determine if any other features need to be enabled as part of the upgrade engagement. In combination with the Scoping Assessment DA offering, the delivery team will also determine the optimal approach, resource plan and estimate, and overall timeline to upgrade the solution to the current product version. Before initiating the Upgrade Assessment DA, the services delivery team should meet with the customer to ascertain and confirm that there is interest in performing the upgrade. Especially where delivery resources are in high demand, this is an important step that the sales teams need to carry out before involving the delivery resources such as solution architects and senior application consultants. Sales personnel can use the resources in the Sure Step Diagnostic Preparation activity to understand and position the current capabilities of the corresponding Microsoft Dynamics solution. Once customer interest in upgrading has been determined, the services delivery team can employ the Upgrade Assessment DA offering. The aim of the Upgrade Assessment is to identify the complexity of upgrading the existing solution and to highlight areas of feature enhancements, complexities, and risks. The steps performed in the execution of the Upgrade Assessment are shown in the following diagram. The delivery team begins the Upgrade Assessment by understanding the overall objectives for the Upgrade. 
Teams can leverage the product-specific questionnaires provided in Sure Step for Microsoft Dynamics AX, CRM, GP, NAV, and SL. These questionnaires also include specific sections and questions for interfaces, infrastructure, and so on, so they can also be leveraged in the following steps. One of the important tasks at the outset is to review the upgrade path for Microsoft Dynamics and any associated ISV software, to determine whether the upgrade from the customer's existing product version to the targeted version of Microsoft Dynamics is supported. This will have a bearing on how the upgrade can be executed—can you follow a supported upgrade path, or is it pretty much a full reimplementation of the solution? The next step in executing the Upgrade Assessment is to assess the existing solution's configurations and customizations. In this step, the delivery team reviews which features of Microsoft Dynamics have been enabled for the customer, including which ones have been configured to meet the customer's needs and which ones have been customized. This will allow the delivery team to take the overall objectives for the upgrade and determine which of these configurations and customizations will need to be ported over to the new solution, and which ones should be retired. For example, the older version may have necessitated customizations in areas where the solution did not have corresponding functionality. Or perhaps the solution needed a specific ISV solution to meet a need. If the current product version provides these features as standard functionality, these customizations or ISV solutions no longer need to be part of the new solution. The next Upgrade Assessment step is to examine the custom interfaces for the existing solution. This includes assessing any custom code written to interface the solution to third-party solutions, such as an external database for reporting purposes. This step is followed by reviewing the existing infrastructure and architecture configuration so that the delivery team can understand the hardware components that can be leveraged for the new solution. The delivery team can provide confirmation on whether the existing infrastructure can support the upgrade application or if additional infrastructure components may be necessary. The final step of the Upgrade Assessment DA offering is for the delivery team to complete the detailed analysis of the customer's existing solution and generate a report of their findings. The report, to be presented to the customer for approval, will include the following topics: The scope of the upgrade, including a list of functional and technical areas that will be enhanced in the new solution. A list of the functional areas of the application categorized to show the expected complexity involved in upgrading them. If there are areas of the existing implementation that will require further examination or additional effort to upgrade successfully due to the inherent complexity, they must be highlighted. Areas of the current solution that could be remapped to new functionality in the current version of the base Microsoft Dynamics product. An overall recommended approach to the upgrade, including alternatives to address any new functionality desired. The Upgrade Assessment provides the customer early identification of issues and risks that could occur during an upgrade so that appropriate mitigating actions can be initiated accordingly. 
The customer can also get a level of confidence that an appropriate level of project governance for the upgrade is available, as well as that the correct upgrade approach will be undertaken by the delivery team. In the next sections, we will discuss how the Upgrade Assessment DA becomes the basis for completing the customer's due diligence, and sets the stage for a quality upgrade of the customer's solution. When to use the other Decision Accelerator offerings After the Upgrade Assessment DA has been executed, the remaining DA offerings may also be needed in the due diligence process for the existing Microsoft Dynamics customer. In this section, we will discuss the scenarios that may call for the usage of the DA offerings, and which ones would apply to that particular scenario. From the Upgrade Assessment DA, the delivery team determines the existing business functions and requirements that need to be upgraded to the new release. Using the Fit Gap and Solution Blueprint DA offering, they can then determine and document how these requirements will be ported over. If meeting the requirement is more than implementing standard features, the approach maybe a re-configuration, custom code rewrite, or workflow setup. Additionally, if new features are required as part of the upgrade, these requirements should also be classified in the Fit Gap worksheet either as Fit or as Gap. They should also be further classified as Standard, Configuration, or Workflow as the case may be for the Fits, and Customization for the Gaps. The Architecture Assessment DA can be used determine the new hardware configuration for the upgraded solution. It can also be used to address any performance issues up-front through the execution of the Proof of Concept Benchmark sub-offering. The Scoping Assessment DA can be used to determine the effort, timeline, and resources needed to execute the upgrade. If it was determined with the Upgrade Assessment DA that new functionality will be introduced, the delivery team and the customer must also determine the Release plan. We will discuss upgrade approaches and Release planning in more detail in the next section. It is important to note that all three of the above Decision Accelerator Offerings— the Fit Gap and Solution Blueprint, the Architecture Assessment, and the Scoping Assessment can be executed together with the Upgrade Assessment DA as one engagement for the customer. The point of this section is not that each of these offerings needs to be positioned individually for the customer. On the contrary, depending on the scope, the delivery team could easily perform the exercise in tandem. The point of emphasis in this section for the reader is that if you are assessing an upgrade for the customer, you should be able to leverage the templates in each of the DA offerings, and combine them as you deem fit for your engagement. Lastly, the Proof of Concept DA offering and Business Case DA offering may also apply to an upgrade engagement, but typically only for a small subset of customers. Examples include customers who maybe on a very old version of the Microsoft Dynamics solution so that they pretty much need a re-implementation of the solution with the new version of the product, or customers that need complex functionality to be enabled as part of the upgrade. In both these cases, the customer may request the delivery team to prove out certain components of the solution prior to embarking on a full upgrade, in which case the Proof of Concept DA may be executed. 
They may also request assistance from the delivery team to assess the return on investment for the upgraded solution, in which case the Business Case DA may be employed. Determining the upgrade approach and release schedule As noted in the previous section, the customer and the delivery team should work together to select the right approach for the upgrade during the course of the upgrade diagnostics. Sure Step recommends two approaches to Upgrades: Technical upgrade: Use this approach if the upgrade mostly applies to application components, such as executable files, code components, and DLLs. This approach can be used to bring a customized solution to the latest release, provided the application functionality and business workflow stay relatively the same. Functional upgrade: Use this approach if new application functionality or major changes in the existing business workflows are desired during the course of the upgrade. Additional planning, testing, and rework of the existing solution are inherent in this complex upgrade process, and as such more aligned to a Functional upgrade. Functional upgrades are typically performed in multiple Releases. The following diagram depicts the two Upgrade approaches and the Release schedules. Depending on the scope of the upgrade, the customer engagement may have one or more delivery Releases. If for example, the customer's solution is on a supported upgrade path, the Technical Upgrade maybe delivered in a single Release using the Sure Step Upgrade project type. If the new solution requires several new processes to be enabled, the Functional Upgrade may be delivered in two or more Releases. For example, if the customer needs advanced supply chain functionality such as production scheduling and/or advanced warehousing to be enabled as part of the upgrade, the recommended approach is to first complete the Technical Upgrade using the Sure Step Upgrade project type to port the existing functionality over to the new product version in Release 1, then add the advanced supply chain functionality using the Rapid, Standard, Agile, or Enterprise project types in Release 2. As noted earlier, the DA offerings can be executed individually or in combination, depending on the customer engagement. Regardless of how they are executed, it is imperative that the customer and delivery team select the right approach and develop the necessary plans such as Project Plan, Resource Plan, Project Charter, and/or Communication Plan. These documents should form the basis for the upgrade delivery Proposal. When the Proposal and Statement of Work are approved, it is time to begin the execution of the solution upgrade.  

Introducing Salesforce Chatter

Packt
21 Nov 2013
5 min read
(For more resources related to this topic, see here.) An overview of cloud computing Cloud computing is a subscription-based service that provides us with computing resources and networked storage space. It allows you to access your information anytime and from anywhere. The only requirement is that one must have an Internet connection. That's all. If you have a cloud-based setup, there is no need to maintain the server in the future. We can think of cloud computing as similar to our e-mail account. Think of your accounts such as Gmail, Hotmail, and so on. We just need a web browser and an Internet connection to access our information. We do not need separate software to access our e-mail account; it is different from the text editor installed on our computer. There is no need of physically moving storage and information; everything is up and running over there and not at our end. It is the same with cloud; we choose what has to be stored and accessed on cloud. You also don't have to pay an employee or contractor to maintain the server since it is based on the cloud. While traditional technologies and computer setup require you to be physically present at the same place to access information, cloud removes this barrier and allows us to access information from anywhere. Cloud computing helps businesses to perform better by allowing employees to work from remote locations (anywhere on the globe). It provides mobile access to information and flexibility to the working of a business organization. Depending on your needs, we can subscribe to the following type of clouds: Public cloud: This cloud can be accessed by subscribers who have an Internet connection and access to cloud storage Private cloud: This is accessed by a limited group of people or members of an organization Community cloud: This is a cloud that is shared between two or more organizations that have similar requirements Hybrid cloud: This is a combination of at least two clouds, where the clouds are a combination of public, private, or community Depending on your need, you have the ability to subscribe to a specific cloud provider. Cloud providers follow the pay-as-you-go method. It means that, if your technological needs change, you can purchase more and continue working on cloud. You do not have to worry about the storage configuration and management of servers because everything is done by your cloud provider. An overview of salesforce.com Salesforce.com is the leader in pay-as-you-go enterprise cloud computing. It specializes in CRM software products for sales and customer services and supplies products for building and running business apps. Salesforce has recently developed a social networking product called Chatter for its business apps. With the concept of no software or hardware required, we are up and running and seeing immediate positive effects on our business. It is a platform for creating and deploying apps for social enterprise. This does not require us to buy or manage servers, software, or hardware. Here you can focus fully on building apps that include mobile functionality, business processes, reporting, and search. All apps run on secure servers and proven services that scale, tune, and back up data automatically. Collaboration in the past Collaboration always plays a key role to improve business outcomes; it is a crucial necessity in any professional business. The central meaning of communication has changed over time. 
With changes in people's individual living situations as well as advancements in technology, how one communicates with the rest of the world has been altered. A century or two ago, people communicated using smoke signals, carrier pigeons, and drum beats, or spoke to one another face to face. As the world and technology developed, we found that we could send longer messages over long distances with ease. This has caused a decline in face-to-face interaction and a substantial growth in communication via technology.

The old way of face-to-face interaction limited the business process, because there was a gap in collaboration between clients, companies, and employees situated in distant places. This reduced profit, ROI, and customer satisfaction. In the past, there was no faster way to communicate, so collaboration was a time-consuming task for a business, and the effect was a loss of client retention.

Imagine a situation where a sales representative is close to closing a deal, but the decision maker is out of the office. In the past, there was no fast or direct way to communicate. This lack of efficient communication sometimes impacted the business negatively, in addition to the loss of potential opportunities.

Summary

In this article we learned about cloud computing and Salesforce.com, and discussed collaboration in the new era by comparing it to the past. We also introduced Salesforce Chatter and its effect on ROI (Return on Investment).

Developing Middleware

Packt
08 Aug 2016
16 min read
In this article by Doug Bierer, author of the book PHP 7 Programming Cookbook, we will cover the following topics:

Authenticating with middleware
Making inter-framework system calls
Using middleware to cross languages

(For more resources related to this topic, see here.)

Introduction

As often happens in the IT industry, terms get invented, and then used and abused. The term middleware is no exception. Arguably the first use of the term came out of the Internet Engineering Task Force (IETF) in the year 2000. Originally, the term was applied to any software which operates between the transport (that is, TCP/IP) and the application layer. More recently, especially with the acceptance of PHP Standard Recommendation number 7 (PSR-7), middleware, specifically in the PHP world, has been applied to the web client-server environment.

Authenticating with middleware

One very important usage of middleware is to provide authentication. Most web-based applications need the ability to verify a visitor via username and password. By incorporating PSR-7 standards into an authentication class, you will make it generically useful across the board, so to speak, secure in the knowledge that it can be used in any framework that provides PSR-7-compliant request and response objects.

How to do it…

1. We begin by defining an Application\Acl\AuthenticateInterface class. We use this interface to support the Adapter software design pattern, making our Authenticate class more generically useful by allowing a variety of adapters, each of which can draw authentication from a different source (for example, from a file, using OAuth2, and so on). Note the use of the PHP 7 ability to define the return value data type:

namespace Application\Acl;

use Psr\Http\Message\{ RequestInterface, ResponseInterface };

interface AuthenticateInterface
{
    public function login(RequestInterface $request) : ResponseInterface;
}

Note that by defining a method that requires a PSR-7-compliant request and produces a PSR-7-compliant response, we have made this interface universally applicable.

2. Next, we define the adapter that implements the login() method required by the interface. We make sure to use the appropriate classes, and define fitting constants and properties. The constructor makes use of Application\Database\Connection:

namespace Application\Acl;

use PDO;
use Application\Database\Connection;
use Psr\Http\Message\{ RequestInterface, ResponseInterface };
use Application\MiddleWare\{ Response, TextStream };

class DbTable implements AuthenticateInterface
{
    const ERROR_AUTH = 'ERROR: authentication error';

    protected $conn;
    protected $table;

    public function __construct(Connection $conn, $tableName)
    {
        $this->conn  = $conn;
        $this->table = $tableName;
    }

3. The core login() method extracts the username and password from the request object. We then do a straightforward database lookup. If there is a match, we store user information in the response body, JSON-encoded:

    public function login(RequestInterface $request) : ResponseInterface
    {
        $code = 401;
        $info = FALSE;
        $body = new TextStream(self::ERROR_AUTH);
        $params = json_decode($request->getBody()->getContents());
        $response = new Response();
        $username = $params->username ?? FALSE;
        if ($username) {
            $sql  = 'SELECT * FROM ' . $this->table . ' WHERE email = ?';
            $stmt = $this->conn->pdo->prepare($sql);
            $stmt->execute([$username]);
            $row = $stmt->fetch(PDO::FETCH_ASSOC);
            if ($row) {
                if (password_verify($params->password, $row['password'])) {
                    unset($row['password']);
                    $body = new TextStream(json_encode($row));
                    $response->withBody($body);
                    $code = 202;
                    $info = $row;
                }
            }
        }
        return $response->withBody($body)->withStatus($code);
    }
}

Best practice: Never store passwords in clear text. When you need to do a password match, use password_verify(), which negates the need to reproduce the password hash.

4. The Authenticate class is a wrapper for an adapter class that implements AuthenticateInterface. Accordingly, the constructor takes an adapter class as an argument, as well as a string that serves as the key under which authentication information is stored in $_SESSION:

namespace Application\Acl;

use Application\MiddleWare\{ Response, TextStream };
use Psr\Http\Message\{ RequestInterface, ResponseInterface };

class Authenticate
{
    const ERROR_AUTH  = 'ERROR: invalid token';
    const DEFAULT_KEY = 'auth';

    protected $adapter;
    protected $token;

    public function __construct(AuthenticateInterface $adapter, $key)
    {
        $this->key     = $key;
        $this->adapter = $adapter;
    }

5. In addition, we provide a login form with a security token, which helps prevent Cross-Site Request Forgery (CSRF) attacks:

    public function getToken()
    {
        $this->token = bin2hex(random_bytes(16));
        $_SESSION['token'] = $this->token;
        return $this->token;
    }

    public function matchToken($token)
    {
        $sessToken = $_SESSION['token'] ?? date('Ymd');
        return ($token == $sessToken);
    }

    public function getLoginForm($action = NULL)
    {
        $action  = ($action) ? 'action="' . $action . '" ' : '';
        $output  = '<form method="post" ' . $action . '>';
        $output .= '<table><tr><th>Username</th><td>';
        $output .= '<input type="text" name="username" /></td>';
        $output .= '</tr><tr><th>Password</th><td>';
        $output .= '<input type="password" name="password" />';
        $output .= '</td></tr><tr><th>&nbsp;</th>';
        $output .= '<td><input type="submit" /></td>';
        $output .= '</tr></table>';
        $output .= '<input type="hidden" name="token" value="';
        $output .= $this->getToken() . '" />';
        $output .= '</form>';
        return $output;
    }

6. Finally, the login() method in this class checks whether the token is valid. If not, a 400 response is returned. Otherwise, the login() method of the adapter is called:

    public function login(RequestInterface $request) : ResponseInterface
    {
        $params = json_decode($request->getBody()->getContents());
        $token  = $params->token ?? FALSE;
        if (!($token && $this->matchToken($token))) {
            $code = 400;
            $body = new TextStream(self::ERROR_AUTH);
            $response = new Response($code, $body);
        } else {
            $response = $this->adapter->login($request);
        }
        if ($response->getStatusCode() >= 200
            && $response->getStatusCode() < 300) {
            $_SESSION[$this->key] =
                json_decode($response->getBody()->getContents());
        } else {
            $_SESSION[$this->key] = NULL;
        }
        return $response;
    }
}

How it works…

Go ahead and define the classes presented in this recipe, summarized in the following table:

Class                                    Discussed in these steps
Application\Acl\AuthenticateInterface    1
Application\Acl\DbTable                  2 - 3
Application\Acl\Authenticate             4 - 6

You can then define a chap_09_middleware_authenticate.php calling program that sets up autoloading and uses the appropriate classes:

<?php
session_start();
define('DB_CONFIG_FILE', __DIR__ . '/../config/db.config.php');
define('DB_TABLE', 'customer_09');
define('SESSION_KEY', 'auth');
require __DIR__ . '/../Application/Autoload/Loader.php';
Application\Autoload\Loader::init(__DIR__ . '/..');

use Application\Database\Connection;
use Application\Acl\{ DbTable, Authenticate };
use Application\MiddleWare\{ ServerRequest, Request, Constants, TextStream };

You are now in a position to set up the authentication adapter and core class:

$conn   = new Connection(include DB_CONFIG_FILE);
$dbAuth = new DbTable($conn, DB_TABLE);
$auth   = new Authenticate($dbAuth, SESSION_KEY);

Be sure to initialize the incoming request, and set up the request to be made to the authentication class:

$incoming = new ServerRequest();
$incoming->initialize();
$outbound = new Request();

Check the incoming class method to see if it is POST. If so, pass a request to the authentication class:

if ($incoming->getMethod() == Constants::METHOD_POST) {
    $body = new TextStream(json_encode($incoming->getParsedBody()));
    $response = $auth->login($outbound->withBody($body));
}
$action = $incoming->getServerParams()['PHP_SELF'];
?>

The display logic looks like this:

<?= $auth->getLoginForm($action) ?>

Here is the output from an invalid authentication attempt. Notice the 401 status code on the right. In this illustration, you could add a var_dump() of the response object.

Here is a successful authentication:

Making inter-framework system calls

One of the primary reasons for the development of PSR-7 (and middleware) was a growing need to make calls between frameworks. It is of interest to note that the main documentation for PSR-7 is hosted by the PHP Framework Interop Group (PHP-FIG).

How to do it…

The primary mechanism used in middleware inter-framework calls is to create a driver program that executes framework calls in succession, maintaining a common request and response object. The request and response objects are expected to represent Psr\Http\Message\ServerRequestInterface and Psr\Http\Message\ResponseInterface, respectively.

For the purposes of this illustration, we define a middleware session validator. The constants and properties reflect the session thumbprint, which is a term we use to incorporate factors such as the website visitor's IP address, browser, and language settings:

namespace Application\MiddleWare\Session;

use InvalidArgumentException;
use Psr\Http\Message\{ ServerRequestInterface, ResponseInterface };
use Application\MiddleWare\{ Constants, Response, TextStream };

class Validator
{
    const KEY_TEXT          = 'text';
    const KEY_SESSION       = 'thumbprint';
    const KEY_STATUS_CODE   = 'code';
    const KEY_STATUS_REASON = 'reason';
    const KEY_STOP_TIME     = 'stop_time';
    const ERROR_TIME        = 'ERROR: session has exceeded stop time';
    const ERROR_SESSION     = 'ERROR: thumbprint does not match';
    const SUCCESS_SESSION   = 'SUCCESS: session validates OK';

    protected $sessionKey;
    protected $currentPrint;
    protected $storedPrint;
    protected $currentTime;
    protected $storedTime;

The constructor takes a ServerRequestInterface instance and the session as arguments. If the session is an array (such as $_SESSION), we wrap it in a class. The reason why we do this is in case we are passed a session object, such as JSession used in Joomla. We then create the thumbprint using the factors previously mentioned. If the stored thumbprint is not available, we assume this is the first time, and store the current print as well as the stop time, if this parameter is set.
We used md5() because it is a fast hash, is not exposed externally, and is therefore useful to this application:

    public function __construct(ServerRequestInterface $request, $stopTime = NULL)
    {
        $this->currentTime  = time();
        $this->storedTime   = $_SESSION[self::KEY_STOP_TIME] ?? 0;
        $this->currentPrint =
            md5($request->getServerParams()['REMOTE_ADDR']
              . $request->getServerParams()['HTTP_USER_AGENT']
              . $request->getServerParams()['HTTP_ACCEPT_LANGUAGE']);
        $this->storedPrint  = $_SESSION[self::KEY_SESSION] ?? NULL;
        if (empty($this->storedPrint)) {
            $this->storedPrint = $this->currentPrint;
            $_SESSION[self::KEY_SESSION] = $this->storedPrint;
            if ($stopTime) {
                $this->storedTime = $stopTime;
                $_SESSION[self::KEY_STOP_TIME] = $stopTime;
            }
        }
    }

It is not required to define __invoke(), but this magic method is quite convenient for standalone middleware classes. As is the convention, we accept ServerRequestInterface and ResponseInterface instances as arguments. In this method we simply check to see if the current thumbprint matches the one stored. The first time, of course, they will match. But on subsequent requests, the chances are that an attacker intent on session hijacking will be caught out. In addition, if the session time exceeds the stop time (if set), likewise, a 401 code will be sent:

    public function __invoke(ServerRequestInterface $request, Response $response)
    {
        $code = 401;  // unauthorized
        if ($this->currentPrint != $this->storedPrint) {
            $text[self::KEY_TEXT] = self::ERROR_SESSION;
            $text[self::KEY_STATUS_REASON] = Constants::STATUS_CODES[401];
        } elseif ($this->storedTime) {
            if ($this->currentTime > $this->storedTime) {
                $text[self::KEY_TEXT] = self::ERROR_TIME;
                $text[self::KEY_STATUS_REASON] = Constants::STATUS_CODES[401];
            } else {
                $code = 200;  // success
            }
        }
        if ($code == 200) {
            $text[self::KEY_TEXT] = self::SUCCESS_SESSION;
            $text[self::KEY_STATUS_REASON] = Constants::STATUS_CODES[200];
        }
        $text[self::KEY_STATUS_CODE] = $code;
        $body = new TextStream(json_encode($text));
        return $response->withStatus($code)->withBody($body);
    }

We can now put our new middleware class to use. The main problems with inter-framework calls, at least at this point, are summarized here. Accordingly, how we implement middleware depends heavily on the last point:

Not all PHP frameworks are PSR-7-compliant
Existing PSR-7 implementations are not complete
All frameworks want to be the "boss"

As an example, have a look at the configuration files for Zend Expressive, which is a self-proclaimed PSR-7 middleware microframework. Here is the file middleware-pipeline.global.php, which is located in the config/autoload folder in a standard Expressive application. The dependencies key is used to identify middleware wrapper classes that will be activated in the pipeline:

<?php

use Zend\Expressive\Container\ApplicationFactory;
use Zend\Expressive\Helper;

return [
    'dependencies' => [
        'factories' => [
            Helper\ServerUrlMiddleware::class =>
                Helper\ServerUrlMiddlewareFactory::class,
            Helper\UrlHelperMiddleware::class =>
                Helper\UrlHelperMiddlewareFactory::class,
            // insert your own class here
        ],
    ],

Under the middleware_pipeline key, you can identify classes that will be executed before or after the routing process occurs. Optional parameters include path, error, and priority:

    'middleware_pipeline' => [
        'always' => [
            'middleware' => [
                Helper\ServerUrlMiddleware::class,
            ],
            'priority' => 10000,
        ],
        'routing' => [
            'middleware' => [
                ApplicationFactory::ROUTING_MIDDLEWARE,
                Helper\UrlHelperMiddleware::class,
                // insert reference to middleware here
                ApplicationFactory::DISPATCH_MIDDLEWARE,
            ],
            'priority' => 1,
        ],
        'error' => [
            'middleware' => [
                // Add error middleware here.
            ],
            'error' => true,
            'priority' => -10000,
        ],
    ],
];

Another technique is to modify the source code of an existing framework module and make a request to a PSR-7-compliant middleware application. Here is an example modifying a Joomla! installation to include a middleware session validator. Add this code at the end of the index.php file in the /path/to/joomla folder. Since Joomla! uses Composer, we can leverage the Composer autoloader:

session_start();  // to support use of $_SESSION
$loader = include __DIR__ . '/libraries/vendor/autoload.php';
$loader->add('Application', __DIR__ . '/libraries/vendor');
$loader->add('Psr', __DIR__ . '/libraries/vendor');

We can then create an instance of our middleware session validator, and make a validation request just before $app = JFactory::getApplication('site');:

$session   = JFactory::getSession();
$request   = (new Application\MiddleWare\ServerRequest())->initialize();
$response  = new Application\MiddleWare\Response();
$validator = new Application\Security\Session\Validator($request, $session);
$response  = $validator($request, $response);
if ($response->getStatusCode() != 200) {
    // take some action
}

How it works…

First, create the Application\MiddleWare\Session\Validator test middleware class described above. Then you will need to go to getcomposer.org and follow the directions to obtain Composer. Next, build a basic Zend Expressive application, as shown next. Be sure to select No when prompted for a minimal skeleton:

cd /path/to/source/for/this/chapter
php composer.phar create-project zendframework/zend-expressive-skeleton expressive

This will create a folder /path/to/source/for/this/chapter/expressive. Change to this directory. Modify public/index.php as follows:

<?php
if (php_sapi_name() === 'cli-server'
    && is_file(__DIR__ . parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH))
) {
    return false;
}
chdir(dirname(__DIR__));
session_start();
$_SESSION['time'] = time();
$appDir = realpath(__DIR__ . '/../../..');
$loader = require 'vendor/autoload.php';
$loader->add('Application', $appDir);
$container = require 'config/container.php';
$app = $container->get(Zend\Expressive\Application::class);
$app->run();

You will then need to create a wrapper class that invokes our session validator middleware. Create a SessionValidateAction.php file that needs to go in the /path/to/source/for/this/chapter/expressive/src/App/Action folder. For the purposes of this illustration, set the stop time parameter to a short duration. In this case, time() + 10 gives you 10 seconds:

namespace App\Action;

use Application\MiddleWare\Session\Validator;
use Zend\Diactoros\{ Request, Response };
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;

class SessionValidateAction
{
    public function __invoke(ServerRequestInterface $request,
                             ResponseInterface $response,
                             callable $next = null)
    {
        $inbound   = new Response();
        $validator = new Validator($request, time() + 10);
        $inbound   = $validator($request, $response);
        if ($inbound->getStatusCode() != 200) {
            session_destroy();
            setcookie('PHPSESSID', 0, time() - 300);
            $params = json_decode(
                $inbound->getBody()->getContents(), TRUE);
            echo '<h1>', $params[Validator::KEY_TEXT], '</h1>';
            echo '<pre>', var_dump($inbound), '</pre>';
            exit;
        }
        return $next($request, $response);
    }
}

You will now need to add the new class to the middleware pipeline. Modify config/autoload/middleware-pipeline.global.php as follows; the modifications are the new 'invokables' entry and the App\Action\SessionValidateAction::class line in the routing middleware:

<?php

use Zend\Expressive\Container\ApplicationFactory;
use Zend\Expressive\Helper;

return [
    'dependencies' => [
        'invokables' => [
            App\Action\SessionValidateAction::class =>
                App\Action\SessionValidateAction::class,
        ],
        'factories' => [
            Helper\ServerUrlMiddleware::class =>
                Helper\ServerUrlMiddlewareFactory::class,
            Helper\UrlHelperMiddleware::class =>
                Helper\UrlHelperMiddlewareFactory::class,
        ],
    ],
    'middleware_pipeline' => [
        'always' => [
            'middleware' => [
                Helper\ServerUrlMiddleware::class,
            ],
            'priority' => 10000,
        ],
        'routing' => [
            'middleware' => [
                ApplicationFactory::ROUTING_MIDDLEWARE,
                Helper\UrlHelperMiddleware::class,
                App\Action\SessionValidateAction::class,
                ApplicationFactory::DISPATCH_MIDDLEWARE,
            ],
            'priority' => 1,
        ],
        'error' => [
            'middleware' => [
                // Add error middleware here.
            ],
            'error' => true,
            'priority' => -10000,
        ],
    ],
];

You might also consider modifying the home page template to show the status of $_SESSION. The file in question is /path/to/source/for/this/chapter/expressive/templates/app/home-page.phtml. Simply adding var_dump($_SESSION) should suffice.

Initially, you should see something like this:

After 10 seconds, refresh the browser. You should now see this:

Using middleware to cross languages

Except in cases where you are trying to communicate between different versions of PHP, PSR-7 middleware will be of minimal use. Recall what the acronym stands for: PHP Standards Recommendations. Accordingly, if you need to make a request to an application written in another language, treat it as you would any other web service HTTP request.

How to do it…

In the case of PHP 4, you actually have a chance, in that there was limited support for object-oriented programming. There is not enough space to cover all the changes, but we present a potential PHP 4 version of Application\MiddleWare\ServerRequest. The first thing to note is that there are no namespaces! Accordingly, we use a classname with underscores, _, in place of namespace separators:

class Application_MiddleWare_ServerRequest
extends Application_MiddleWare_Request
implements Psr_Http_Message_ServerRequestInterface
{

All properties are identified in PHP 4 using the keyword var:

    var $serverParams;
    var $cookies;
    var $queryParams;
    // not all properties are shown

The initialize() method is almost the same, except that syntax such as $this->getServerParams()['REQUEST_URI'] was not allowed in PHP 4. Accordingly, we need to split this out into a separate variable:

    function initialize()
    {
        $params = $this->getServerParams();
        $this->getCookieParams();
        $this->getQueryParams();
        $this->getUploadedFiles();
        $this->getRequestMethod();
        $this->getContentType();
        $this->getParsedBody();
        return $this->withRequestTarget($params['REQUEST_URI']);
    }

All of the $_XXX super-globals were present in later versions of PHP 4:

    function getServerParams()
    {
        if (!$this->serverParams) {
            $this->serverParams = $_SERVER;
        }
        return $this->serverParams;
    }
    // not all getXXX() methods are shown to conserve space

The null coalesce operator was only introduced in PHP 7. We need to use isset(XXX) ? XXX : ''; instead:

    function getRequestMethod()
    {
        $params = $this->getServerParams();
        $method = isset($params['REQUEST_METHOD'])
                ? $params['REQUEST_METHOD'] : '';
        $this->method = strtolower($method);
        return $this->method;
    }

The JSON extension was not introduced until PHP 5. Accordingly, we need to be satisfied with raw input. We could also possibly use serialize() or unserialize() in place of json_encode() and json_decode():

    function getParsedBody()
    {
        if (!$this->parsedBody) {
            if (($this->getContentType() ==
                    Constants::CONTENT_TYPE_FORM_ENCODED
                 || $this->getContentType() ==
                    Constants::CONTENT_TYPE_MULTI_FORM)
                && $this->getRequestMethod() == Constants::METHOD_POST) {
                $this->parsedBody = $_POST;
            } elseif ($this->getContentType() ==
                          Constants::CONTENT_TYPE_JSON
                      || $this->getContentType() ==
                          Constants::CONTENT_TYPE_HAL_JSON) {
                ini_set("allow_url_fopen", true);
                $this->parsedBody = file_get_contents('php://stdin');
            } elseif (!empty($_REQUEST)) {
                $this->parsedBody = $_REQUEST;
            } else {
                ini_set("allow_url_fopen", true);
                $this->parsedBody = file_get_contents('php://stdin');
            }
        }
        return $this->parsedBody;
    }

The withXXX() methods work pretty much the same in PHP 4:

    function withParsedBody($data)
    {
        $this->parsedBody = $data;
        return $this;
    }

Likewise, the withoutXXX() methods work the same as well:

    function withoutAttribute($name)
    {
        if (isset($this->attributes[$name])) {
            unset($this->attributes[$name]);
        }
        return $this;
    }

}

For websites using other languages, we could use the PSR-7 classes to formulate requests and responses, but would then need to use an HTTP client to communicate with the other website. Here is an example:

$request = new Request(
    TARGET_WEBSITE_URL,
    Constants::METHOD_POST,
    new TextStream($contents),
    [Constants::HEADER_CONTENT_TYPE   => Constants::CONTENT_TYPE_FORM_ENCODED,
     Constants::HEADER_CONTENT_LENGTH => $body->getSize()]
);
$data = http_build_query(['data' => $request->getBody()->getContents()]);
$defaults = array(
    CURLOPT_URL        => $request->getUri()->getUriString(),
    CURLOPT_POST       => true,
    CURLOPT_POSTFIELDS => $data,
);
$ch = curl_init();
curl_setopt_array($ch, $defaults);
$response = curl_exec($ch);
curl_close($ch);

Summary

In this article, we learned how to provide authentication with middleware, how to make calls between frameworks, and how to make a request to an application written in another language.
Oracle Siebel CRM 8: User Properties for Specialized Application Logic

Packt
28 Apr 2011
5 min read
Understanding user properties

User properties are child object types which are available for the following object types in the Siebel Repository:

Applet, Control, List Column
Application
Business Service
Business Component, Field
Integration Object, Integration Component, Integration Component Field
View

To view the User Property (or User Prop, as it is sometimes abbreviated) object type, we typically have to modify the list of displayed types for the Object Explorer window. This can be achieved by selecting the Options command in the View menu. In the Object Explorer tab of the Development Tools Options dialog, we can select the object types for display as shown in the following screenshot:

In the preceding example, the Business Component User Prop type is enabled for display. After confirming the changes in the Development Tools Options dialog by clicking the OK button, we can, for example, navigate to the Account business component and review its existing user properties by selecting the Business Component User Prop type in the Object Explorer. The following screenshot shows the list of user properties for the Account business component:

The screenshot also shows the standard Properties window on the right. This is to illustrate that a list of user properties, which mainly define a Name/Value pair, can simply be understood as an extension to an object type's usual properties, which are accessible by means of the Properties window and represent Name/Value pairs as well.

Because an additional user property is just a new record in the Siebel Repository, the list of user properties for a given parent record is theoretically infinite. This allows developers to define a rich set of business logic as a simple list of Name/Value pairs instead of having to write program code. The Name property of a user property definition must use a reserved name - and optional sequence number - as defined by Oracle engineering. The Value property must also follow the syntax defined for the special purpose of the user property.

Did you know? The list of available names for a user property depends on the object type (for example, Business Component) and the C++ class associated with the object definition. For example, the business component Account is associated with the CSSBCAccountSIS class, which defines a different range of available user property names than other classes.

Many user property names are officially documented in the Siebel Developer's Reference guide in the Siebel Bookshelf. We can find the guide online at the following URL:

http://download.oracle.com/docs/cd/E14004_01/books/ToolsDevRef/ToolsDevRef_UserProps.html

The user property names described in this guide are intended for use by custom developers. Any other user property which we may find in the Siebel Repository but which is not officially documented should be considered an internal user property of Oracle engineering.
Because the internal user properties could change in a future version of Siebel CRM in both syntax and behavior without prior notice, it is highly recommended to use only user properties which are documented by Oracle. Another way to find out which user property names are made available by Oracle to customers is to click the dropdown icon in the Name property of a user property record. This opens the user property pick list, which displays a wide range of officially documented user properties along with a description text.

Multi-instance user properties

Some user properties can be instantiated more than once. If this is the case, a sequence number is used to generate a distinguished name. For example, the On Field Update Set user property used on business components uses a naming convention as displayed in the following screenshot:

In the previous example, we can see four instances of the On Field Update Set user property, distinguished by a sequential numeric suffix (1 to 4). Because it is very likely that Oracle engineers and custom developers add additional instances of the same user property while working on the next release, Oracle provides a customer allowance gap of nine instances for the next sequence number. In the previous example, a custom developer could continue the set of On Field Update Set user properties with a suffix of 13. By doing so, the custom developer will most likely avoid conflicts during an upgrade to a newer version of Siebel CRM. The Oracle engineer would continue with a suffix of 5, and upgrade conflicts will only occur when Oracle defines more than eight additional instances. The gap of nine also ensures that the sequence of multi-instance user properties is still functional when one or more of the user property records are marked as inactive.

In the following sections, we will describe the most important user properties for the business and user interface layers. In addition, we will examine case study scenarios to identify best practices for using user properties to define specialized behavior of Siebel CRM applications.

Business component and field user properties

On the business layer of the Siebel Repository, user properties are widely used to control specialized behavior of business components and fields. The following table describes the most important user properties on the business component level. The Multiple Instances column contains Yes for all user properties which can be instantiated more than once per parent object.

Source: Siebel Developer's Reference, Version 8.1: http://download.oracle.com/docs/cd/E14004_01/books/ToolsDevRef/booktitle.html
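As an aside, the "gap of nine" numbering convention for multi-instance user properties described above can be made concrete with a small sketch. The following Python snippet is purely illustrative; it is not Siebel Tools or Siebel scripting code, and the property values shown are invented placeholders. It simply computes the suffix a custom developer should use for the next instance:

import re

# Illustrative only: user properties modeled as (Name, Value) records.
# The Value strings are invented placeholders, not real Siebel syntax.
vendor_user_props = [
    ('On Field Update Set 1', '"Field A", "Field B", "[Value]"'),
    ('On Field Update Set 2', '"Field C", "Field D", "[Value]"'),
    ('On Field Update Set 3', '"Field E", "Field F", "[Value]"'),
    ('On Field Update Set 4', '"Field G", "Field H", "[Value]"'),
]

def next_custom_suffix(user_props, base_name):
    """Return the suffix a custom developer should use next, leaving the
    recommended gap of nine after the vendor's highest existing suffix."""
    pattern = re.compile(r'^' + re.escape(base_name) + r' (\d+)$')
    suffixes = []
    for name, _value in user_props:
        match = pattern.match(name)
        if match:
            suffixes.append(int(match.group(1)))
    highest = max(suffixes) if suffixes else 0
    # Four vendor instances -> custom numbering starts at 13, leaving
    # suffixes 5 to 12 free for future Oracle-delivered instances.
    return highest + 9

print(next_custom_suffix(vendor_user_props, 'On Field Update Set'))  # 13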

Building Your Hello Mediation Project using IBM WebSphere Process Server 7 and Enterprise Service Bus 7

Packt
13 Jul 2010
7 min read
This article will help you build your first HelloWorld WebSphere Enterprise Service Bus (WESB) mediation application and WESB mediation flow-based application. It will then give an overview of Service Message Object (SMO) standards and how they relate to building applications using an SOA approach. At the completion of this article, you will have an understanding of how to begin building, testing, and deploying WESB mediations, including:

Overview of WESB-based programming fundamentals, including WS-* standards and Service Message Objects (SMOs)
Building the first mediation module project in WID
Using mediation flows
Deploying the module on a server and testing the project
Logging, debugging, and troubleshooting basics
Exporting projects from WID

WS standards

Before we get down to discussing mediation flows, it is essential to take a moment and acquaint ourselves with some of the Web Service (WS) standards that WebSphere Integration Developer (WID) complies with. By using WID's user-friendly visual interfaces and drag-and-drop paradigms, developers automatically build process flows that are globally standardized and compliant. This becomes critical in an ever-changing business environment that demands flexible integration with business partners. Here are some of the key specifications that you should be aware of, as defined by the World Wide Web Consortium (W3C):

WS-Security: This standard is used to secure Web Services at the message level, independent of the transport protocol. To learn more about WID and WS-Security, refer to http://publib.boulder.ibm.com/infocenter/dmndhelp/v7r0mx/topic/com.ibm.wbit.help.runtime.doc/deploy/topics/cwssecurity.html.
WS-Policy: A framework that helps define characteristics of Web Services through policies. To learn more about WS-Policy, refer to http://www.w3.org/TR/ws-policy/.
WS-Addressing: This specification aids interoperability between Web Services by standardizing ways to address Web Services and by providing addressing information in messages. In version 7.0, enhancements were made to WID to provide Web Services support for WS-Addressing and Attachments. For more details about WS-Addressing, refer to http://www.w3.org/Submission/ws-addressing/.

What are mediation flows?

In an SOA, service consumers and service providers use an ESB as a communication vehicle. When services are loosely coupled through an ESB, the overall system has more flexibility and can be easily modified and enhanced to meet changing business requirements. We also saw that an ESB by itself is an enabler of many patterns; it enables protocol transformations and provides mediation services, which can inspect, modify, augment, and transform a message as it flows from requestor to provider.

In WebSphere Enterprise Service Bus, mediation modules provide the ESB functionality. The heart of a mediation module is the mediation flow component, which provides the mediation logic applied to a message as it flows from a service consumer to a provider. The mediation flow component is a type of SCA component that is typically used in, but not limited to, a mediation module. A mediation flow component contains a source interface and target references, similar to other SCA components. The source interface is described using a WSDL interface and must match the WSDL definition of the export to which it is wired. The target references are described using WSDL and must match the WSDL definitions of the imports or Java components to which they are wired.
The mediation flow component handles most of the ESB functions, including:

Message filtering, which is the capability to filter messages based on the content of the incoming message.
Dynamic routing and selection of service provider, which is the capability to route incoming messages to the appropriate target at runtime based on predefined policies and rules.
Message transformation, which is the capability to transform messages between source and target interfaces. This transformation can be defined using XSL stylesheets or business object maps.
Message manipulation/enrichment, which is the capability to manipulate or enrich incoming message fields before they are sent to the target. This capability also allows you to do database lookups as needed.
If the previous functionalities do not fit your requirements, you have the capability of defining custom mediation behavior in Java.

The following diagram describes the architectural layout of a mediation flow:

In the diagram, on the left-hand side you see the single service requester or source interface, and on the right-hand side are the multiple service providers or target references. The mediation flow is the set of logical processing steps that efficiently routes messages from the service requestor to the most appropriate service provider, and back to the service requestor, for that particular transaction and business environment.

A mediation flow can be a request flow or a request and response flow. In a request flow, the sequence of processing steps is defined only from the source to the target; no message is returned to the source. In a request and response flow, however, the sequence of processing steps is defined from the single source to the multiple targets and back from the multiple targets to the single source. In the next section, we take a deeper look into message objects and how they are handled in mediation flows.

Mediation primitives

What are the various mediation primitives available in WESB? Mediation primitives are the core building blocks used to process the request and response messages in a mediation flow. There are two kinds:

Built-in primitives, which perform some predefined function that is configurable through the use of properties.
Custom mediation primitives, which allow you to implement the function in Java.

Mediation primitives have input and output terminals through which the message flows. Almost all primitives have only one input terminal, but multiple input terminals are possible for custom mediation primitives. Primitives can have zero, one, or more output terminals. There is also a special terminal called the fail terminal, through which the message is propagated when the processing of a primitive results in an exception.
The different types of mediation primitives that are available in a mediation flow, along with their purpose, are summarized in the following table:

Service invocation
Service Invoke: Invoke an external service; the message is modified with the result

Routing primitives
Message Filter: Selectively forward messages based on element values
Type Filter: Selectively forward messages based on element types
Endpoint Lookup: Find potential endpoints from a registry query
Fan Out: Start an iterative or split flow for aggregation
Fan In: Check completion of a split or aggregated flow
Policy Resolution: Set policy constraints from a registry query
Flow Order: Execute or fire the output terminals in a defined order
Gateway Endpoint Lookup: Find potential endpoints in special cases from a registry
SLA Check: Verify whether the message complies with the SLA
UDDI Endpoint Lookup: Find potential endpoints from a UDDI registry query

Transformation primitives
XSL Transformation: Update and modify messages using XSLT
Business Object Map: Update and modify messages using business object maps
Message Element Setter: Set, update, copy, and delete message elements
Set Message Type: Set elements to a more specific type
Database Lookup: Set elements from contents within a database
Data Handler: Update and modify messages using a data handler
Custom Mediation: Read, update, and modify messages using Java code
SOAP Header Setter: Read, update, copy, and delete SOAP header elements
HTTP Header Setter: Read, update, copy, and delete HTTP header elements
JMS Header Setter: Read, update, copy, and delete JMS header elements
MQ Header Setter: Read, update, copy, and delete MQ header elements

Tracing primitives
Message Logger: Write a log message to a database or a custom destination
Event Emitter: Raise a Common Base Event to CEI
Trace: Send a trace message to a file or system out for debugging

Error handling primitives
Stop: Stop a single path in the flow without an exception
Fail: Stop the entire flow and raise an exception
Message Validator: Validate a message against a schema and assertions

Mediation subflow primitive
Subflow: Represents a user-defined subflow
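Although the primitives above are wired graphically in WID rather than hand-coded, the idea of chaining them between a source interface and target references is easy to picture in ordinary code. The following minimal Python sketch is illustrative only; it is not WESB, SCA, or SMO code, and every name in it (message fields, endpoints) is invented for the example:

# Illustrative sketch of a request flow: filter -> transform -> route.
# Real mediation primitives operate on Service Message Objects in WESB.

def message_filter(message):
    # Message Filter: only forward messages whose body contains an order
    return message if 'order' in message else None

def transform(message):
    # Transformation (stand-in for XSLT or a business object map):
    # map the source format to the target format
    order = message['order']
    return {'OrderId': order['id'], 'Sku': order['sku'].upper()}

def route(message):
    # Dynamic routing: choose a target reference based on message content
    endpoints = {'providerA': 'http://provider-a.example/orders',
                 'providerB': 'http://provider-b.example/orders'}
    key = 'providerA' if message['Sku'].startswith('LAT') else 'providerB'
    return endpoints[key], message

def request_flow(message):
    filtered = message_filter(message)
    if filtered is None:
        return None  # comparable to a path ending in a Stop primitive
    return route(transform(filtered))

print(request_flow({'order': {'id': 42, 'sku': 'lat-0042'}}))

A real mediation flow expresses the same filter, transform, and route steps as wired primitives rather than function calls, but the shape of the processing pipeline is the same.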

Simple Slack Websocket Integrations in <10 lines of Python

Bradley Cicenas
09 Sep 2016
3 min read
If you use Slack, you've probably added a handful of integrations for your team from the ever-growing App Directory, and maybe even had an idea for your own Slack app. While the Slack API is featureful and robust, writing your own integration can be exceptionally easy. Through the Slack RTM (Real Time Messaging) API, you can write your own basic integrations in just a few lines of Python using the SlackSocket library.

Structure

Our integration will be structured with the following basic components:

Listener
Integration/bot logic
Response

The listener watches for one or more pre-defined "trigger" words, while the response posts the result of our intended task.

Basic Integration

We'll start by setting up SlackSocket with our API token:

from slacksocket import SlackSocket

slack = SlackSocket('<slack-token>', event_filter=['message'])

By default, SlackSocket will listen for all Slack events. There are a lot of different events sent via RTM, but we're only concerned with 'message' events for our integration, so we've set an event_filter for only this type.

Using the SlackSocket events() generator, we'll read each 'message' event that comes in and can act on various conditions:

for e in slack.events():
    if e.event['text'] == '!hello':
        slack.send_msg('it works!', channel_name=e.event['channel'])

If our message text matches the string '!hello', we'll respond to the source channel of the event with a given message ('it works!').

At this point, we've created a complete integration that can connect to Slack as a bot user (or regular user), follow messages, and respond accordingly. Let's build something a bit more useful, like a password generator for throwaway accounts.

Expanding Functionality

For this integration command, we'll write a simple function to generate a random alphanumeric string 15 characters long:

import random
import string

def randomstr():
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(15))

Now we're ready to provide our random string generator to the rest of the team using the same chat logic as before, responding to the source channel with our generated password:

for e in slack.events():
    if e.event['text'].startswith('!random'):
        slack.send_msg(randomstr(), channel_name=e.event['channel'])

Altogether:

import random
import string

from slacksocket import SlackSocket

slack = SlackSocket('<slack-token>', event_filter=['message'])

def randomstr():
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(15))

for e in slack.events():
    if e.event['text'].startswith('!random'):
        slack.send_msg(randomstr(), channel_name=e.event['channel'])

And the results:

A complete integration in 10 lines of Python. Not bad!

Beyond simplicity, SlackSocket provides a great deal of flexibility for writing apps, bots, or integrations. In the case of massive Slack groups with several thousand users, messages are buffered locally to ensure that none are missed. Dropped websocket connections are automatically re-connected as well, making it an ideal base for a chat client. The code for SlackSocket is available on GitHub, and as always, we welcome any contributions or feature requests!
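As a further, purely hypothetical extension of the same pattern, you could scope the bot to a single channel and add a help command. The sketch below reuses only the calls already shown above (events() and send_msg(), with e.event['channel'] carrying the channel name as in the article) and assumes randomstr() and slack are defined as in the "Altogether" listing; the channel and command names are illustrative:

# Hypothetical extension: one channel, several commands.
ALLOWED_CHANNEL = 'bot-playground'   # illustrative channel name

COMMANDS = {
    '!random': randomstr,                                   # from the article above
    '!help': lambda: 'available commands: !random, !help',  # invented addition
}

for e in slack.events():
    if e.event['channel'] != ALLOWED_CHANNEL:
        continue  # ignore messages from other channels
    words = e.event['text'].split()
    if words and words[0] in COMMANDS:
        slack.send_msg(COMMANDS[words[0]](), channel_name=e.event['channel'])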
About the author

Bradley Cicenas is a New York City-based infrastructure engineer with an affinity for microservices, systems design, data science, and stoops.

Introduction to Object-Oriented Programming using Python, JavaScript, and C#

Packt
17 Feb 2016
5 min read
In this extract from Learning Object-Oriented Programming by Gaston Hillar, we will show you the benefits of thinking in an object-oriented way. From encapsulation and inheritance to polymorphism and operator overloading, we will hammer home the thought process that must be mastered in order to truly make the most of the object-oriented approach. In this article, we will see how to generate blueprints for objects, and we will design classes which include the attributes or fields that provide data for each instance. We will explore the different object-oriented approaches in different languages, such as Python, JavaScript, and C#.

Generating blueprints for objects

Imagine that you want to draw and calculate the areas of four different rectangles. You will end up with four rectangles, each with different widths, heights, and areas. Now imagine having a blueprint to simplify the process of drawing each different rectangle.

In object-oriented programming, a class is a blueprint or template definition from which objects are created. Classes are models that define the state and behavior of an object. After defining a class that determines the state and behavior of a rectangle, we can use it to generate objects that represent the state and behavior of each real-world rectangle. Objects are also known as instances. For example, we can say each rectangle object is an instance of the rectangle class.

The following image shows four rectangle instances, with their widths and heights specified: Rectangle #1, Rectangle #2, Rectangle #3, and Rectangle #4. We can use a rectangle class as a blueprint to generate the four different rectangle instances.

It is very important to understand the difference between a class and the objects or instances generated through its usage. Object-oriented programming allows us to discover the blueprint we used to generate a specific object. Thus, we are able to infer that each object is an instance of the rectangle class.

Recognizing attributes/fields

Now, we need to design the classes to include the attributes that provide the required data to each instance. In other words, we have to make sure that each class has the necessary variables that encapsulate all the data required by the objects to perform all their tasks.

Let's start with the Square class. It is necessary to know the length of the sides for each instance of this class, that is, for each square object. Therefore, we need an encapsulated variable that allows each instance of this class to specify the value of the length of a side.

The variables defined in a class to encapsulate data for each instance of the class are known as attributes or fields. Each instance has its own independent value for the attributes or fields defined in the class.

The Square class defines a floating-point attribute named LengthOfSide whose initial value is equal to 0 for any new instance of the class. After you create an instance of the Square class, it is possible to change the value of the LengthOfSide attribute. For example, imagine that you create two instances of the Square class. One of the instances is named square1, and the other is square2. The instance names allow you to access the encapsulated data for each object, and therefore, you can use them to change the values of the exposed attributes. Imagine that our object-oriented programming language uses a dot (.) to allow us to access the attributes of the instances. So, square1.LengthOfSide provides access to the length of side for the Square instance named square1, and square2.LengthOfSide does the same for the Square instance named square2. You can assign the value 10 to square1.LengthOfSide and 20 to square2.LengthOfSide. This way, each Square instance is going to have a different value for its LengthOfSide attribute.

Now, let's move to the Rectangle class. We can define two floating-point attributes for this class: Width and Height. Their initial values are also going to be 0. Then, you can create two instances of the Rectangle class: rectangle1 and rectangle2. You can assign the value 10 to rectangle1.Width and 20 to rectangle1.Height. This way, rectangle1 represents a 10 x 20 rectangle. You can assign the value 30 to rectangle2.Width and 50 to rectangle2.Height to make the second Rectangle instance represent a 30 x 50 rectangle.

Object-oriented approaches in Python, JavaScript, and C#

Python, JavaScript, and C# support object-oriented programming, also known as OOP. However, each programming language takes a different approach. Both Python and C# support classes and inheritance. Therefore, you can use the different syntax provided by each of these programming languages to declare the Shape class and its four subclasses. Then, you can create instances of each of the subclasses and call the different methods.

On the other hand, JavaScript uses an object-oriented model that doesn't use classes. This object-oriented model is known as prototype-based programming. However, don't worry: everything you have learned so far in your simple object-oriented design journey can be coded in JavaScript. Instead of using inheritance to achieve behavior reuse, we can expand upon existing objects. Thus, we can say that objects serve as prototypes in JavaScript. Instead of focusing on classes, we work with instances and decorate them to emulate inheritance in class-based languages. The object-oriented model named prototype-based programming is also known by other names, such as classless programming, instance-based programming, or prototype-oriented programming.

There are other important differences between Python, JavaScript, and C#. They have a great impact on the way you can code object-oriented designs. Carry on reading Learning Object-Oriented Programming to learn different ways to code the same object-oriented design in three programming languages. So what are you waiting for? Take a look at what else the book offers now!
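Before you do, here is a minimal Python sketch of the Square and Rectangle classes exactly as described above. The attribute names and the sample values 10, 20, 30, and 50 come from the text; the area() methods are a small illustrative addition of ours, not something the extract defines:

class Square:
    def __init__(self):
        self.LengthOfSide = 0.0     # initial value is 0 for every new instance

    def area(self):                 # illustrative addition, not from the text
        return self.LengthOfSide * self.LengthOfSide


class Rectangle:
    def __init__(self):
        self.Width = 0.0
        self.Height = 0.0

    def area(self):                 # illustrative addition, not from the text
        return self.Width * self.Height


square1, square2 = Square(), Square()
square1.LengthOfSide = 10           # dot notation reaches each instance's own data
square2.LengthOfSide = 20

rectangle1, rectangle2 = Rectangle(), Rectangle()
rectangle1.Width, rectangle1.Height = 10, 20    # a 10 x 20 rectangle
rectangle2.Width, rectangle2.Height = 30, 50    # a 30 x 50 rectangle

print(square1.area(), square2.area())           # 100 400
print(rectangle1.area(), rectangle2.area())     # 200 1500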
Content Based Routing on Microsoft Platform

Packt
16 Sep 2010
9 min read
Use case

McKeever Technologies is a medium-sized business which manufactures latex products. They have recently grown in size through a series of small acquisitions of competitor companies. As a result, the organization has a mix of both home-grown applications and packaged line-of-business systems. They have not standardized their order management software and still rely on multiple systems, each of which houses details about a specific set of products. Their developers are primarily oriented towards .NET, but there are some parts of the organization that have deep Java expertise.

Up until now, orders placed with McKeever Technologies were faxed to a call center and manually entered into the order system associated with the particular product. Also, when customers want to discover the state of their submitted order, they are forced to contact McKeever Technologies' call center and ask an agent to look up their order. The company realizes that in order to increase efficiency, reduce data entry errors, and improve customer service, they must introduce some automation to their order intake and query processes.

McKeever Technologies receives less than one thousand orders per day and does not expect this number to increase exponentially in the coming years. Their current order management systems have either Oracle or SQL Server database backends, and some of them offer SOAP service interfaces for basic operations. These systems do not all maintain identical service-level agreements, so the solution must be capable of handling expected or unexpected downtime of the target system gracefully. The company is looking to stand up a solution in less than four months while not introducing too much additional management overhead to an already overworked IT maintenance organization. The solution is expected to live in production for quite some time and may only be revisited once a long-term order management consolidation strategy can be agreed upon.

Key requirements

The following are key requirements for a new software solution:

Accept inbound purchase requests and determine which system to add them to based on which product has been ordered
Support a moderate transaction volume and reliable delivery to target systems
Enable communication with diverse systems through either web or database protocols

Additional facts

The technology team has acquired the following additional facts that will shape their proposed solution:

The number of order management systems may change over time as consolidation occurs and new acquisitions are made.
A single customer may have orders on multiple systems. For example, a paint manufacturer may need different types of latex for different products.
The customers will want a single view of all orders, notwithstanding which order entry system they reside on.
The lag between entry of an order and its appearance on a customer-facing website should be minimal (less than one hour).
All order entry systems are on the same network. There are no occasionally connected systems (for example, remote locations that may potentially lose their network connectivity).
Strategic direction is to convert Oracle systems to Microsoft SQL Server and Java to C#.
The new order tracking system does not need to integrate with order fulfillment or other systems at launch.
There are priorities for orders (for example, "I need it tomorrow" requires immediate processing and overnight shipment versus "I need it next week").
Legacy SQL Servers are SQL Server 2005 or 2008. No SQL Server 2000 systems.
Pattern description

The organization is trying to streamline data entry into multiple systems that perform similar functions. They wish to take in the same data (an order), but depending on attributes of the order, it should be loaded into one system or another. This looks like a content-based routing scenario.

What is content-based routing? In essence, it is distributing data based on the values it contains. You would typically use this sort of pattern when you have a single capability (for example, ADD ORDER, LOOKUP EMPLOYEE, DELETE RESERVATION) spread across multiple systems. Unlike a publish/subscribe pattern, where multiple downstream systems may all want the same message (that is, one-to-many), a content-based routing solution typically helps you steer a message to the system that can best handle the request.

What is an alternative to implementing this routing pattern? You could define distinct channels for each downstream system and force the caller to pick the service they wish to consume. That is, for McKeever Technologies, the customer would call one service if they were ordering products A, B, or C, and use another service for products D, E, or F. This clearly fails the SOA rules of abstraction and encapsulation and forces the clients to maintain knowledge of the backend processing.

The biggest remaining question is what the best way is to implement this pattern. We would want to make sure that the routing rules are easily maintained and can be modified without expensive redeployments or refactoring. Our routing criteria should be rich enough that we can make decisions based on the content itself, header information, or metadata about the transmission.

Candidate architectures

A team of technologists has reviewed the use case and drafted three candidate solutions. Each candidate has its own strengths and weaknesses, but one of them will prove to be the best choice.

Candidate architecture #1 – BizTalk Server

A BizTalk Server-based solution seems to be a good fit for this customer scenario. McKeever Technologies is primarily looking to automate existing processes and communicate with existing systems, which are both things that BizTalk does well.

Solution design aspects

We are dealing with a fairly low volume of data (1,000 orders per day and, at most, 5,000 queries of order status) and small individual message sizes. A particular order or status query should be no larger than 5 KB in size, meaning that this falls right into the sweet spot of BizTalk data processing.

This proposed system is responsible for accepting and processing new orders, which means that reliable delivery is critical. BizTalk can provide built-in quality of service, guaranteed through its store-and-forward engine, which only discards a message after it has successfully reached its target endpoint. Our solution also needs to be able to communicate with multiple line-of-business systems through a mix of web service and database interfaces. BizTalk Server offers a wide range of database adapters and natively communicates with SOAP-based endpoints.

We are building a new solution which automates a formerly manual process, so we should be able to design a single external interface for publishing new orders and querying order status. But, in the case that we have to support multiple external-facing contracts, BizTalk Server makes it very easy to transform data to canonical messages at the point of entry into the BizTalk engine.
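As a side note, the routing decision at the heart of the pattern described above is simple to sketch outside of any particular product. The following minimal Python sketch is illustrative only: the product-to-system mapping, system names, and order fields are invented, and this is not BizTalk code:

# Illustrative only: route an order to an order-management system based on
# the product ordered. All names and values here are invented examples.
ROUTING_RULES = {
    'latex-gloves':   'OrderSystemA',
    'latex-paint':    'OrderSystemB',
    'latex-adhesive': 'OrderSystemC',
}

def route_order(order, rules=ROUTING_RULES):
    """Return the name of the system that should receive this order."""
    product = order.get('product')
    if product not in rules:
        # comparable to a routing failure or suspended message in an ESB
        raise ValueError('no route defined for product %r' % product)
    return rules[product]

order = {'order_id': 1001, 'product': 'latex-paint', 'quantity': 20,
         'priority': 'I need it tomorrow'}
print(route_order(order))   # OrderSystemB

Keeping the rules in data or configuration rather than in code is what allows them to change without expensive redeployments, which is exactly the property called out in the pattern description above.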
Transforming to a canonical format at the point of entry means that the internal processing of BizTalk can be built to support a single data format, while we can still allow slight variations of the message format to be transmitted by clients. Similarly, each target system will have a distinct data format that its interface accepts. Our solution will apply all of its business logic to the canonical data format and transform the data to the target system format at the last possible moment. This will make it easier to add new downstream systems without disturbing the existing endpoints and business logic. From a security standpoint, BizTalk allows us to secure the inbound transport channel and message payload on its way into the BizTalk engine. If transport security is adequate for this customer, then an SSL channel can be set up on the external-facing interface. To assuage any fears of the customer that system or data errors can cause messages to get lost or "stuck", it is critical to include a proactive exception handling aspect. BizTalk Server surfaces exceptions through an administrator console. However, this does not provide a business-friendly way to discover and act upon errors. Fortunately for us, BizTalk enables us to listen for error messages and either re-route those messages or spin up an error-specific business process. For this customer, we could recommend either logging errors to a database, where business users view exceptions through a website interface, or publishing messages to a SharePoint site and building a process around fixing and resubmitting any bad orders. For errors that require immediate attention, we can also leverage BizTalk's native capability to send e-mail messages. We know that McKeever Technologies will eventually move to a single order processing system, so this solution will undergo changes at some point in the future. Besides this avenue of change, we could also experience changes to the inbound interfaces, existing downstream systems, or even the contents of the messages themselves. BizTalk has a strong versioning story that allows us to build our solution in a modular fashion and isolate points of change. Solution delivery aspects McKeever Technologies is not currently a BizTalk shop, so they will need to both acquire and train resources to effectively build their upcoming solution. Their existing developers, who are already familiar with Microsoft's .NET Framework, can learn how to construct BizTalk solutions in a fairly short amount of time. The tools to build BizTalk artifacts are hosted within Visual Studio .NET, and BizTalk projects can reside alongside other .NET project types. Because the BizTalk-based messaging solution has a design paradigm (for example, publish/subscribe, distributed components chained together) different from that of a typical custom .NET solution, understanding the toolset alone will not ensure delivery success. If McKeever Technologies decides to bring in a product like BizTalk Server, it will be vital for them to engage an outside expert to act as a solution architect and bring existing BizTalk experience to bear when building this solution. Solution operation aspects Operationally, BizTalk Server provides a mature, rich interface for monitoring solution health and configuring runtime behavior. There is also a strong underlying set of APIs that can be leveraged using scripting technologies so that automation of routine tasks can be performed.
While BizTalk Server has tools that will feel familiar to a Windows administrator, the BizTalk architecture is unique in the Microsoft ecosystem and will require explicit staff training. Organizational aspects BizTalk Server would be a new technology for McKeever Technologies, so there is definitely risk involved. It becomes necessary to purchase licenses, provision environments, train users, and hire experts. While these are all responsible things to do when a new technology is introduced, this does mean a fairly high startup cost to implement this solution. That said, McKeever Technologies will need a long-term integration solution as they attempt to modernize their IT landscape and be in better shape to absorb new organizations and quickly integrate with new systems. An investment in an enterprise service bus like BizTalk Server will pay long-term dividends even if the initial costs are high. Solution evaluation
NHibernate 2: Mapping relationships and Fluent Mapping

Packt
19 May 2010
8 min read
(Read more interesting articles on NHibernate 2 Beginner's Guide here.) Relationships Remember all those great relationships we created in our database to relate our tables together? Basically, the primary types of relationships are as follows: One-to-many (OTM) Many-to-one (MTO) One-to-one (OTO) Many-to-many (MTM) We won't focus on the OTO relationship because it is really uncommon. In most situations, if there is a need for a one-to-one relationship, it should probably be consolidated into the main table. One-to-many relationships The most common type of relationship we will map is the one-to-many (OTM) and its inverse, the many-to-one (MTO). If you remember, these are just two different sides of the same relationship, as seen in the following screenshot: This is a simple one-to-many (OTM) relationship where a Contact can be associated with zero to many OrderHeader records (because the relationship fields allow nulls). Notice that the Foreign Key for the relationship is stored on the "many" side, ShipToContact_Id and BillToContact_Id on the OrderHeader table. In our mapping files, we can map this relationship from both sides. If you remember, our classes for these objects contain placeholders for each side of this relationship. On the OrderHeader side, we have a Contact object called BillToContact: private Contact _billToContact; public Contact BillToContact { get { return _billToContact; } set { _billToContact = value; } } On the Contact side, we have the inverse relationship mapped. From this vantage point, there could be several OrderHeader objects that this Contact object is associated with, so we need a collection to map it: private IList<OrderHeader> _billToOrderHeaders; public IList<OrderHeader> BillToOrderHeaders { get { return _billToOrderHeaders; } set { _billToOrderHeaders = value; } } As we have represented this relationship in two separate classes, we also need to map it in two separate mapping files. Let's start with the OrderHeader side. As this is the "many" side of the one-to-many relationship, we need to use a many-to-one element to map it. Things to note here are the name and class attributes. name, again, is the property in our class that this field maps to, and class is the "other end" of the Foreign Key relationship, or the Contact type in this case. <many-to-one name="BillToContact" class="Contact"> <column name="BillToContact_Id" length="4" sql-type="int" not-null="false"/> </many-to-one> Just like before, when we mapped our non-relational fields, the length, sql-type, and not-null attributes are optional. Now that we have the "many" side mapped, we need to map the "one" side. In the Contact mapping file, we need to create a bag element to hold all of these OrderHeaders. A bag is the NHibernate way to say that it is an unordered collection allowing duplicated items. We have a name attribute to reference the class property, just like all of our other mapping elements, and a key child element to tell NHibernate which database column this field is meant to represent. <bag name="BillToOrderHeaders" inverse="true" cascade="all-delete-orphan"> <key column="BillToContact_Id"/> <one-to-many class="BasicWebApplication.Common.DataObjects.OrderHeader, BasicWebApplication"/> </bag> If you look at the previous XML code, you will see that the one-to-many tag looks very similar to the many-to-one tag we just created for the other side. That's because this is the inverse side of the relationship.
We even tell NHibernate that the inverse relationship exists by using the inverse attribute on the bag element. The class attribute on this tag is just the name of the class that represents the other side of the relationship. The cascade attribute tells NHibernate how operations such as saves and deletes should cascade to the child objects. Another attribute we can add to the bag tag is the lazy attribute. This tells NHibernate to use "lazy loading", which means that the related records won't be pulled from the database or loaded into memory until you actually use them. This is a huge performance gain because you only get data when you need it, without having to do anything. When I say "get Contact record with Id 14", NHibernate will go get the Contact record, but it won't retrieve the associated BillToOrderHeaders (OrderHeader records) until I reference Contact.BillToOrderHeaders to display or act on those objects in my code. By default, "lazy loading" is turned on, so we only need to specify this attribute if we want to turn "lazy loading" off by using lazy="false". Many-to-many relationships The other relationship that is used quite often is the many-to-many (MTM) relationship. In the following example, the Contact_Phone table is used to join the Contact and Phone tables. NHibernate is smart enough to manage these MTM relationships for us, and we can "optimize out" the join table from our classes and just let NHibernate take care of it. Just like the one-to-many relationship, we represent the phones on the Contact class with a collection of Phone objects as follows: private IList<Phone> _phones; public IList<Phone> Phones { get { return _phones; } set { _phones = value; } } Mapping the MTM is very similar to the OTM, just a little more complex. We still use a bag and we still have a key. We need to add the table attribute to the bag element to let NHibernate know which table we are really storing the relationship data in. Instead of a one-to-many or a many-to-one element, both sides use a many-to-many element (makes sense, it is an MTM relationship, right?). The many-to-many element structure is the same as the one-to-many element, with a class attribute and a column child element to describe the relationship. <bag name="Phones" table="Contact_Phone" inverse="false" lazy="true" cascade="none"> <key> <column name="Contact_Id" length="4" sql-type="int" not-null="true"/> </key> <many-to-many class="Phone"> <column name="Phone_Id" length="4" sql-type="int" not-null="true"/> </many-to-many> </bag> From the Phone side, it looks remarkably similar, as it's just the opposite view of the same relationship: <bag name="Contacts" table="Contact_Phone" inverse="false" lazy="true" cascade="none"> <key> <column name="Phone_Id" length="4" sql-type="int" not-null="true"/> </key> <many-to-many class="Contact"> <column name="Contact_Id" length="4" sql-type="int" not-null="true"/> </many-to-many> </bag> Getting started This should be enough information to get us rolling on the path to becoming NHibernate superstars! Now that we have all of the primary fields mapped, let's map the Foreign Key fields. Time for action – mapping relationships If you look at the following database diagram, you will see that there are two relationships that need to be mapped, BillToContact and ShipToContact (represented by BillToContact_Id and ShipToContact_Id in the following screenshot). Let's map these two properties into our hbm.xml files.
Open the OrderHeader.hbm.xml file, which should look something like the following: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping namespace="Ordering.Data" assembly="Ordering.Data"> <class name="OrderHeader" table="OrderHeader"> <id name="Id"> <column name="Id"/> <generator class="hilo"/> </id> <property name="Number" type="String"/> <property name="OrderDate" type="DateTime"/> <property name="ItemQty" type="Int32"/> <property name="Total" type="Decimal"/> </class> </hibernate-mapping> After the Total property, just before the end of the class tag (</class>), add a many-to-one element to map the BillToContact to the Contact class. <many-to-one name="BillToContact" class="Ordering.Data.Contact, Ordering.Data"> <column name="BillToContact_Id" /> </many-to-one> Next, open the Contact.hbm.xml file, which should look as follows: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping namespace="Ordering.Data" assembly="Ordering.Data"> <class name="Contact" table="Contact"> <id name="Id"> <column name="Id"/> <generator class="hilo"/> </id> <property name="FirstName" type="String"/> <property name="LastName" type="String"/> <property name="Email" type="String"/> </class> </hibernate-mapping> After the Email property, just before the end of the class tag (</class>), add a bag with a one-to-many element to map the BillToOrderHeaders to the OrderHeader class. <bag name="BillToOrderHeaders" inverse="true" lazy="true" cascade="all-delete-orphan"> <key column="BillToContact_Id"/> <one-to-many class="OrderHeader"/> </bag> That's it! You just mapped your first one-to-many property! Your finished Contact.hbm.xml file should look as shown in the following screenshot: What just happened? By adding a many-to-one element to the OrderHeader mapping and a bag with a one-to-many child element to the Contact mapping, we were able to map the relationships to the Contact object, allowing us to use dotted notation to access child properties of our objects within our code. Like the great cartographers before us, we have the knowledge and experience to go forth and map the world! Have a go hero – fleshing out the rest of our map Now that you have some experience mapping fields and Foreign Keys from the database, why not have a go at the rest of our database? Start off with the Contact-to-Phone MTM table, and map the rest of the tables to the classes we created earlier, so that we will be ready to actually connect to the database.
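As a point of reference for that exercise, the following is a minimal sketch (not part of the original walkthrough) of how the relationships mapped above are consumed at runtime. It assumes an ISessionFactory has already been built from the hbm.xml files and that Contact and OrderHeader are the Ordering.Data classes used in this article; the contact id value is made up.

using System;
using NHibernate;
using Ordering.Data;

public static class ContactOrdersExample
{
    public static void PrintBillToOrders(ISessionFactory sessionFactory, int contactId)
    {
        using (ISession session = sessionFactory.OpenSession())
        {
            // Loads only the Contact row; the BillToOrderHeaders bag stays
            // unloaded because lazy loading is on by default.
            Contact contact = session.Get<Contact>(contactId);

            // Touching the collection with dotted notation triggers the lazy
            // load, so this must happen while the session is still open.
            foreach (OrderHeader order in contact.BillToOrderHeaders)
            {
                Console.WriteLine("{0} ordered on {1:d}", order.Number, order.OrderDate);
            }
        }
    }
}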
Integrating BizTalk Server and Microsoft Dynamics CRM

Packt
20 Jul 2011
7 min read
What is Microsoft Dynamics CRM? Customer relationship management is a critical part of virtually every business. Dynamics CRM 2011 offers a solution for the three traditional areas of CRM: sales, marketing, and customer service. For customers interested in managing a sales team, Dynamics CRM 2011 has a strong set of features. This includes organizing teams into territories, defining price lists, managing opportunities, maintaining organization structures, tracking sales pipelines, enabling mobile access, and much more. If you are using Dynamics CRM 2011 for marketing efforts, then you have the ability to import data from multiple sources, plan campaigns and set up target lists, create mass communications, track responses to campaigns, share leads with the sales team, and analyze the success of a marketing program. Dynamics CRM 2011 also serves as a powerful hub for customer service scenarios. Features include rich account management, case routing and management, a built-in knowledge base, scheduling of call center resources, scripted Q&A workflows called Dialogs, contract management, and more. Besides these three areas, Microsoft pitches Dynamics CRM as a general-purpose application platform called xRM, where the "X" stands for any sort of relationship management. Dynamics CRM has a robust underlying framework for screen design, security roles, data auditing, entity definition, workflow, and mobility, among others. Instead of building these foundational aspects into every application, we can build our data-driven applications within Dynamics CRM. Microsoft has made a big move into the cloud with this release of Dynamics CRM 2011. For the first time in company history, a product was released online (Dynamics CRM Online) prior to on-premises software. The hosted version of the application runs an identical codebase to the on-premises version, meaning that code built to support a local instance will work just fine in the cloud. In addition to the big play in CRM hosting, Microsoft has also baked Windows Azure integration into Dynamics CRM 2011. Specifically, we now have the ability to configure a call-out to an Azure AppFabric Service Bus endpoint. To do this, the downstream service must implement a specific WCF interface, and within CRM, the Azure AppFabric plugin is configured to call that downstream service through the Azure AppFabric Service Bus relay service. For BizTalk Server to accommodate this pattern, we would want to build a proxy service that implements the required Dynamics CRM 2011 interface and forwards requests into a BizTalk Server endpoint. This article will not demonstrate that scenario, however, as the focus will be on integrating with an on-premises instance only. Why Integrate Dynamics CRM and BizTalk Server? There are numerous reasons to tie these two technologies together. Recall that BizTalk Server is an enterprise integration bus that connects disparate applications. There can be a natural inclination to hoard data within a particular application, but if we embrace real-time message exchange, we can actually have a more agile enterprise. Consider a scenario where a customer's full "contact history" resides in multiple systems. The Dynamics CRM 2011 contact center may only serve a specific audience, and other systems within the company hold additional details about the company's customers. One design choice could be to bulk load that information into Dynamics CRM 2011 on a scheduled interval.
However, it may be more effective to call out to a BizTalk Server service that aggregates data across systems and returns a composite view of a customer's history with a company. In a similar manner, think about how information is shared between systems. A public website for a company may include a registration page where visitors sign up for more information and deeper access to content. That registration event is relevant to multiple systems within the company. We could send that initial registration message to BizTalk Server and then broadcast that message to the multiple systems that want to know about that customer. A marketing application may want to respond with a personalized email welcoming that person to the website. The sales team may decide to follow up with that person if they expressed interest in purchasing products. Our Dynamics CRM 2011 customer service center could choose to automatically add the registration event so that it is ready whenever that customer calls in. In this case, BizTalk Server acts as a central router of data and invokes the exposed Dynamics CRM services to create customers and transactions. Communicating from BizTalk Server to Dynamics CRM The way that you send requests from BizTalk Server to Dynamics CRM 2011 has changed significantly in this release. In the previous versions of Dynamics CRM, a BizTalk "send" adapter was available for communicating with the platform. Dynamics CRM 2011 no longer ships with an adapter and developers are encouraged to use the WCF endpoints exposed by the product. Dynamics CRM has both a WCF REST and SOAP endpoint. The REST endpoint can only be used within the CRM application itself. For instance, you can build what is called a web resource that is embedded in a Dynamics CRM page. This resource could be a Microsoft Silverlight or HTML page that looks up data from three different Dynamics CRM entities and aggregates them on the page. This web resource can communicate with the Dynamics CRM REST API, which is friendly to JavaScript clients. Unfortunately, you cannot use the REST endpoint from outside of the Dynamics CRM environment, but because BizTalk cannot communicate with REST services, this has little impact on the BizTalk integration story. The Dynamics CRM SOAP API, unlike its ASMX web service predecessor, is static and operates with a generic Entity data structure. Instead of having a dynamic WSDL that exposes typed definitions for all of the standard and custom entities in the system, the Dynamics CRM 2011 SOAP API has a set of operations (for example, Create, Retrieve) that function with a single object type. The Entity object has a property identifying which concrete object it represents (for example, Account or Contract), and a name/value pair collection that represents the columns and values in the object it represents. For instance, an Entity may have a LogicalName set to "Account" and columns for "telephone1", "emailaddress", and "websiteurl." In essence, this means that we have two choices when interacting with Dynamics CRM 2011 from BizTalk Server. Our first option is to directly consume and invoke the untyped SOAP API. Doing this involves creating maps from a canonical schema to the type-less Entity schema. In the case of doing a Retrieve operation, we may also have to map the type-less Entity message back to a structured message for more processing. Below, we will walk through an example of this. The second option involves creating a typed proxy service for BizTalk Server to invoke. 
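Before weighing the two options, it may help to see the generic Entity structure described above expressed in code. The sketch below uses the CRM 2011 SDK's late-bound Entity class, which mirrors that same LogicalName-plus-attribute-collection shape; the attribute values are made up, and service is assumed to be an already-established IOrganizationService connection.

using System;
using Microsoft.Xrm.Sdk;

public static class LateBoundAccountExample
{
    public static Guid CreateAccount(IOrganizationService service)
    {
        // LogicalName identifies which concrete entity this generic object represents.
        Entity account = new Entity("account");

        // Columns travel as name/value pairs rather than typed properties.
        account["name"] = "Sample Account";
        account["telephone1"] = "555-0100";
        account["websiteurl"] = "http://www.example.com";

        return service.Create(account);
    }
}

The typed proxy option, described next, hides this name/value plumbing behind a strongly typed contract that BizTalk can consume more comfortably.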
Dynamics CRM has a feature-rich Software Development Kit (SDK) that allows us to create typed objects and send them to the Dynamics CRM SOAP endpoint. This proxy service will then expose a typed interface to BizTalk that operates as desired with a strongly typed schema. An upcoming exercise demonstrates this scenario. Which choice is best? For simple solutions, it may be fine to interact directly with the Dynamics CRM 2011 SOAP API. If you are updating a couple of fields on an entity, or retrieving a pair of data values, the messiness of the untyped schema is worth the straightforward solution. However, if you are making large-scale changes to entities, or getting back an entire entity and publishing it to the BizTalk bus for multiple subscribers to receive, then working strictly with a typed proxy service is the best route. That said, we will look at both scenarios below, and you can make that choice for yourself. Integrating Directly with the Dynamics CRM 2011 SOAP API In the following series of steps, we will look at how to consume the native Dynamics CRM SOAP interface in BizTalk Server. We will first look at how to query Dynamics CRM to return an Entity. After that, we will see the steps for creating a new Entity in Dynamics CRM. Querying Dynamics CRM from BizTalk Server In this scenario, BizTalk Server will request details about a specific Dynamics CRM "contact" record and send the result of that inquiry to another system.
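The walkthrough itself continues beyond this excerpt, but as a rough guide, the lookup it describes boils down to a Retrieve call at the SDK level. The sketch below shows what a typed proxy service sitting between BizTalk and CRM might do internally; the ContactSummary class and the chosen columns are hypothetical, and service is again an established IOrganizationService connection.

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Hypothetical strongly typed result that the proxy service would expose to BizTalk.
public class ContactSummary
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

public static class ContactQueryExample
{
    public static ContactSummary GetContact(IOrganizationService service, Guid contactId)
    {
        // Retrieve returns the generic Entity; only the requested columns come back.
        Entity contact = service.Retrieve(
            "contact",
            contactId,
            new ColumnSet("firstname", "lastname", "emailaddress1"));

        // Map the name/value pairs onto the typed contract before handing it to BizTalk.
        return new ContactSummary
        {
            FirstName = contact.GetAttributeValue<string>("firstname"),
            LastName = contact.GetAttributeValue<string>("lastname"),
            Email = contact.GetAttributeValue<string>("emailaddress1")
        };
    }
}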
Running tasks asynchronously

Packt
29 Dec 2016
12 min read
In this article by Javier Fernández González, author of the book Java 9 Concurrency Cookbook - Second Edition, we will cover how to run tasks asynchronously. When you execute a ForkJoinTask in a ForkJoinPool, you can do it in a synchronous or an asynchronous way. When you do it in a synchronous way, the method that sends the task to the pool doesn't return until the task that was sent finishes its execution. When you do it in an asynchronous way, the method that sends the task to the executor returns immediately, so the task can continue with its execution. (For more resources related to this topic, see here.) You should be aware of a big difference between the two methods. When you use the synchronous methods, the task that calls one of these methods (for example, the invokeAll() method) is suspended until the tasks it sent to the pool finish their execution. This allows the ForkJoinPool class to use the work-stealing algorithm to assign a new task to the worker thread that executed the sleeping task. On the contrary, when you use the asynchronous methods (for example, the fork() method), the task continues with its execution, so the ForkJoinPool class can't use the work-stealing algorithm to increase the performance of the application. In this case, the ForkJoinPool class can only use that algorithm when you call the join() or get() methods to wait for the finalization of a task. In addition to the RecursiveAction and RecursiveTask classes, Java 8 introduced a new kind of ForkJoinTask: the CountedCompleter class. With this kind of task, you can include a completion action that will be executed when the task finishes and there are no pending child tasks. This mechanism is based on a method included in the class (the onCompletion() method) and a counter of pending tasks. This counter is initialized to zero by default and you can increment it atomically when you need to. Normally, you will increment this counter by one each time you launch a child task. Finally, when a task has finished its execution, you can try to complete it and, consequently, execute the onCompletion() method. If the pending count is bigger than zero, it is decremented by one. If it's zero, the onCompletion() method is executed and then the task tries to complete its parent task. In this article, you will learn how to use the asynchronous methods provided by the ForkJoinPool and CountedCompleter classes for the management of tasks. You are going to implement a program that will search for files with a given extension inside a folder and its subfolders. The CountedCompleter class you're going to implement will process the content of a folder. For each subfolder inside that folder, it will send a new task to the ForkJoinPool class in an asynchronous way. For each file inside that folder, the task will check the extension of the file and add it to the result list if it matches. When a task is completed, it will insert the result lists of all its child tasks into its own result list. How to do it... Follow these steps to implement the example: Create a class named FolderProcessor and specify that it extends the CountedCompleter class parameterized with the List<String> type. public class FolderProcessor extends CountedCompleter<List<String>> { Declare the serial version UID of the class. This element is necessary because the parent class of the CountedCompleter class, the ForkJoinTask class, implements the Serializable interface.
private static final long serialVersionUID = -1826436670135695513L; Declare a private String attribute named path. This attribute will store the full path of the folder this task is going to process. private String path; Declare a private String attribute named extension. This attribute will store the name of the extension of the files this task is going to look for. private String extension; Declare two private List attributes named tasks and resultList. We will use the first one to store all the child tasks launched from this task and the other one to store the list of results of this task. private List<FolderProcessor> tasks; private List<String> resultList; Implement one constructor for the class to initialize its attributes and its parent class. We declare this constructor as protected, as it will only be used internally. protected FolderProcessor (CountedCompleter<?> completer, String path, String extension) { super(completer); this.path=path; this.extension=extension; } We implement another public constructor to be used externally. As the task created by this constructor won't have a parent task, we don't include this object as a parameter. public FolderProcessor (String path, String extension) { this.path=path; this.extension=extension; } Implement the compute() method. As the base class of our task is the CountedCompleter class, the return type of this method is void. @Override protected void compute() { First, initialize the two list attributes. resultList=new ArrayList<>(); tasks=new ArrayList<>(); Get the content of the folder. File file=new File(path); File content[] = file.listFiles(); For each element in the folder, if there is a subfolder, create a new FolderProcessor object and execute it asynchronously using the fork() method. We use the first constructor of the class and pass the current task as the completer task of the new one. We also increment the counter of pending tasks using the addToPendingCount() method. if (content != null) { for (int i = 0; i < content.length; i++) { if (content[i].isDirectory()) { FolderProcessor task=new FolderProcessor(this, content[i].getAbsolutePath(), extension); task.fork(); addToPendingCount(1); tasks.add(task); Otherwise, compare the extension of the file with the extension you are looking for using the checkFile() method and, if they are equal, store the full path of the file in the result list declared earlier. } else { if (checkFile(content[i].getName())){ resultList.add(content[i].getAbsolutePath()); } } } If the list of FolderProcessor subtasks has more than 50 elements, write a message to the console to indicate this circumstance. if (tasks.size()>50) { System.out.printf("%s: %d tasks ran.\n",file.getAbsolutePath(),tasks.size()); } } Finally, try to complete the current task using the tryComplete() method: tryComplete(); } Implement the onCompletion() method. This method will be executed when all the child tasks (all the tasks that have been forked from the current task) have finished their execution. We add the result lists of all the child tasks to the result list of the current task. @Override public void onCompletion(CountedCompleter<?> completer) { for (FolderProcessor childTask : tasks) { resultList.addAll(childTask.getResultList()); } } Implement the checkFile() method. This method checks whether the name of the file passed as a parameter ends with the extension you are looking for. If so, the method returns true; otherwise, it returns false.
private boolean checkFile(String name) { return name.endsWith(extension); } Finally, implement the getResultList() method to return the result list of a task, and override the getRawResult() method so that it returns the same list (otherwise the join() calls made later would return null). The code of these methods is very simple, so it won't be included. Implement the main class of the example by creating a class named Main with a main() method. public class Main { public static void main(String[] args) { Create a ForkJoinPool using the default constructor. ForkJoinPool pool=new ForkJoinPool(); Create three FolderProcessor tasks. Initialize each one with a different folder path. FolderProcessor system=new FolderProcessor("C:\\Windows", "log"); FolderProcessor apps=new FolderProcessor("C:\\Program Files","log"); FolderProcessor documents=new FolderProcessor("C:\\Documents And Settings","log"); Execute the three tasks in the pool using the execute() method. pool.execute(system); pool.execute(apps); pool.execute(documents); Write to the console information about the status of the pool every second until the three tasks have finished their execution. do { System.out.printf("******************************************\n"); System.out.printf("Main: Parallelism: %d\n",pool.getParallelism()); System.out.printf("Main: Active Threads: %d\n",pool.getActiveThreadCount()); System.out.printf("Main: Task Count: %d\n",pool.getQueuedTaskCount()); System.out.printf("Main: Steal Count: %d\n",pool.getStealCount()); System.out.printf("******************************************\n"); try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } } while ((!system.isDone())||(!apps.isDone())||(!documents.isDone())); Shut down the ForkJoinPool using the shutdown() method. pool.shutdown(); Write the number of results generated by each task to the console. List<String> results; results=system.join(); System.out.printf("System: %d files found.\n",results.size()); results=apps.join(); System.out.printf("Apps: %d files found.\n",results.size()); results=documents.join(); System.out.printf("Documents: %d files found.\n",results.size()); How it works... The following screenshot shows part of an execution of this example: The key to this example is in the FolderProcessor class. Each task processes the content of a folder. As you know, this content has the following two kinds of elements: Files Other folders If the task finds a folder, it creates another FolderProcessor object to process that folder and sends it to the pool using the fork() method. This method sends the task to the pool, which will execute it if it has a free worker thread, or create a new one if necessary. The method returns immediately, so the task can continue processing the content of the folder. For every file, a task compares its extension with the one it's looking for and, if they are equal, adds the name of the file to the list of results. Once the task has processed all the content of the assigned folder, we try to complete the current task. As we explained in the introduction of this article, when we try to complete a task, the code of the CountedCompleter class looks at the value of the pending task counter. If this value is bigger than 0, it decrements that counter. Otherwise, if the value is 0, the task executes the onCompletion() method and then tries to complete its parent task. In our case, when a task is processing a folder and it finds a subfolder, it creates a new child task, launches that task using the fork() method, and increments the counter of pending tasks.
So, when a task has processed all its content, the counter of pending tasks of the task will be equal to the number of child tasks we have launched. When we call the tryComplete() method, if the folder of the current task has subfolders, this call will decrease the number of pending tasks. Its onCompletion() method is executed only when all its child tasks have been completed. If the folder of the current task hasn't got any subfolders, the counter of pending tasks will be zero; the onCompletion() method will be called immediately and then the task will try to complete its parent task. In this way, we create a tree of tasks from top to bottom that are completed from bottom to top. In the onCompletion() method, we process all the result lists of the child tasks and add their elements to the result list of the current task. The ForkJoinPool class also allows the execution of tasks in an asynchronous way. You have used the execute() method to send the three initial tasks to the pool. In the Main class, you also shut down the pool using the shutdown() method and wrote information about the status and the evolution of the tasks that are running in it. The ForkJoinPool class includes more methods that can be useful for this purpose. There's more... In this example we have used the addToPendingCount() method to increment the counter of pending tasks, but we have other methods we can use to change the value of this counter. setPendingCount(): This method sets the value of the counter of pending tasks. compareAndSetPendingCount(): This method receives two parameters. The first one is the expected value and the second one is the new value. If the value of the counter of pending tasks is equal to the expected value, it sets the counter to the new value. decrementPendingCountUnlessZero(): This method decrements the value of the counter of pending tasks unless it's equal to zero. The CountedCompleter class also includes other methods to manage the completion of the tasks. These are the most significant ones: complete(): This method executes the onCompletion() method regardless of the value of the counter of pending tasks and then tries to complete its completer (parent) task. onExceptionalCompletion(): This method is executed when the completeExceptionally() method has been called or the compute() method has thrown an Exception. Override this method to include your code to process those exceptions. In this example, you have used the join() method to wait for the finalization of tasks and get their results. You can also use one of the two versions of the get() method for this purpose: get(long timeout, TimeUnit unit): This version of the get() method, if the result of the task isn't available, waits the specified time for it. If the specified period of time passes and the result still isn't available, the method throws a TimeoutException exception. The TimeUnit class is an enumeration with the following constants: DAYS, HOURS, MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS, and SECONDS. Unlike get(), the join() method can't be interrupted; interrupting the thread that called the join() method does not make it throw an InterruptedException exception. Summary In this article we learned how to use the asynchronous methods provided by the ForkJoinPool and CountedCompleter classes for the management of tasks. Resources for Article: Further resources on this subject: Why Bother? – Basic [article] Getting Started with Sorting Algorithms in Java [article] Saying Hello to Java EE [article]
Python Testing: Mock Objects

Packt
28 Dec 2010
13 min read
How to install Python Mocker Python Mocker isn't included in the standard Python distribution. That means that we need to download and install it. Time for action – installing Python Mocker At the time of this writing, Python Mocker's home page is located at http://labix.org/mocker, while its downloads are hosted at https://launchpad.net/mocker/+download. Go ahead and download the newest version, and we'll see about installing it. The first thing that needs to be done is to unzip the downloaded file. It's a .tar.bz2, which should just work for Unix, Linux, or OS X users. Windows users will need a third-party program (7-Zip works well: http://www.7-zip.org/) to uncompress the archive. Store the uncompressed files in some temporary location. Once you have the files unzipped somewhere, go to that location via the command line. Now, to do this next step, you either need to be allowed to write files into your Python installation's site-packages directory (which you are, if you're the one who installed Python in the first place) or you need to be using Python version 2.6 or higher. If you can write to site-packages, type $ python setup.py install If you can't write to site-packages, but you're using Python 2.6 or higher, type $ python setup.py install --user Sometimes, a tool called easy_install can simplify the installation process of Python modules and packages. If you want to give it a try, download and install setuptools from http://pypi.python.org/pypi/setuptools, according to the directions on that page, and then run the command easy_install mocker. Once that command is done, you should be ready to use Mocker. Once you have successfully run the installer, Python Mocker is ready for use. What is a mock object in software testing? "Mock" in this sense means "imitation," and that's exactly what a mock object does. Mock objects imitate the real objects that make up your program, without actually being those objects or relying on them in any way. Instead of doing whatever the real object would do, a mock object performs predefined simple operations that look like what the real object should do. That means its methods return appropriate values (which you told it to return) or raise appropriate exceptions (which you told it to raise). A mock object is like a mockingbird, imitating the calls of other birds without comprehending them. We've already used one mock object in our earlier work, when we replaced time.time with an object (in Python, functions are objects) that returned an increasing series of numbers. The mock object was like time.time, except that it always returned the same series of numbers, no matter when we ran our test or how fast the computer was that we ran it on. In other words, it decoupled our test from an external variable. That's what mock objects are all about: decoupling tests from external variables. Sometimes those variables are things like the external time or processor speed, but usually the variables are the behavior of other units. Python Mocker The idea is pretty straightforward, but one look at that mock version of time.time shows that creating mock objects without using a toolkit of some sort can be a dense and annoying process, and can interfere with the readability of your tests. This is where Python Mocker (or any of several other mock object toolkits, depending on preference) comes in. Time for action – exploring the basics of Mocker We'll walk through some of the simplest, and most useful, features of Mocker.
To do that, we'll write tests that describe a class representing a specific mathematical operation (multiplication) which can be applied to the values of arbitrary other mathematical operation objects. In other words, we'll work on the guts of a spreadsheet program (or something similar). We're going to use Mocker to create mock objects to stand in place of the real operation objects. Create up a text file to hold the tests, and add the following at the beginning (assuming that all the mathematical operations will be defined in a module called operations): >>> from mocker import Mocker >>> import operations We've decided that every mathematical operation class should have a constructor accepting the objects representing the new object's operands. It should also have an evaluate function that accepts a dictionary of variable bindings as its parameter and returns a number as the result. We can write the tests for the constructor fairly easily, so we do that first (Note that we've included some explanation in the test file, which is always a good idea): We're going to test out the constructor for the multiply operation, first. Since all that the constructor has to do is record all of the operands, this is straightforward. >>> mocker = Mocker() >>> p1 = mocker.mock() >>> p2 = mocker.mock() >>> mocker.replay() >>> m = operations.multiply(p1, p2) >>> m.operands == (p1, p2) True >>> mocker.restore() >>> mocker.verify() The tests for the evaluate method are somewhat more complicated, because there are several things we need to test. This is also where we start seeing the real advantages of Mocker: Now we're going to check the evaluate method for the multiply operation. It should raise a ValueError if there are less than two operands, it should call the evaluate methods of all operations that are operands of the multiply, and of course it should return the correct value. >>> mocker = Mocker() >>> p1 = mocker.mock() >>> p1.evaluate({}) #doctest: +ELLIPSIS <mocker.Mock object at ...> >>> mocker.result(97.43) >>> mocker.replay() >>> m = operations.multiply(p1) >>> m.evaluate({}) Traceback (most recent call last): ValueError: multiply without at least two operands is meaningless >>> mocker.restore() >>> mocker.verify() >>> mocker = Mocker() >>> p1 = mocker.mock() >>> p1.evaluate({}) #doctest: +ELLIPSIS <mocker.Mock object at ...> >>> mocker.result(97.43) >>> p2 = mocker.mock() >>> p2.evaluate({}) #doctest: +ELLIPSIS <mocker.Mock object at ...> >>> mocker.result(-16.25) >>> mocker.replay() >>> m = operations.multiply(p1, p2) >>> round(m.evaluate({}), 2) -1583.24 >>> mocker.restore() >>> mocker.verify() If we run the tests now, we get a list of failed tests. Most of them are due to Mocker being unable to import the operations module, but the bottom of the list should look like this: Finally, we'll write some code in the operations module that passes these tests, producing the following: class multiply: def __init__(self, *operands): self.operands = operands def evaluate(self, bindings): vals = [x.evaluate(bindings) for x in self.operands] if len(vals) < 2: raise ValueError('multiply without at least two ' 'operands is meaningless') result = 1.0 for val in vals: result *= val return result Now when we run the tests, none of them should fail. What just happened? 
The difficulty in writing the tests for something like this comes (as it often does) from the need to decouple the multiplication class from all of the other mathematical operation classes, so that the results of the multiplication test only depend on whether multiplication works correctly. We addressed this problem by using the Mocker framework for mock objects. The way Mocker works is that you first create an object representing the mocking context, by doing something such as mocker = Mocker(). The mocking context will help you create mock objects, and it will store information about how you expect them to be used. Additionally, it can help you temporarily replace library objects with mocks (like we've previously done with time.time) and restore the real objects to their places when you're done. We'll see more about doing that in a little while. Once you have a mocking context, you create a mock object by calling its mock method, and then you demonstrate how you expect the mock objects to be used. The mocking context records your demonstration, so later on when you call its replay method it knows what usage to expect for each object and how it should respond. Your tests (which use the mock objects instead of the real objects that they imitate) go after the call to replay. Finally, after the test code has been run, you call the mocking context's restore method to undo any replacements of library objects, and then verify to check that the actual usage of the mocks was as expected. Our first use of Mocker was straightforward. We tested our constructor, which is specified to be extremely simple. It's not supposed to do anything with its parameters, aside from storing them away for later. Did we gain anything at all by using Mocker to create mock objects to use as the parameters, when the parameters aren't even supposed to do anything? In fact, we did. Since we didn't tell Mocker to expect any interactions with the mock objects, it will report nearly any usage of the parameters (storing them doesn't count, because storing them isn't actually interacting with them) as errors during the verify step. When we call mocker.verify(), Mocker looks back at how the parameters were really used and reports a failure if our constructor tried to perform some action on them. It's another way to embed our expectations into our tests. We used Mocker twice more, except in those later uses we told Mocker to expect a call to an evaluate method on the mock objects (that is, p1 and p2), and to expect an empty dictionary as the parameter to each of the mock objects' evaluate calls. For each call we told it to expect, we also told it that its response should be to return a specific floating point number. Not coincidentally, that mimics the behavior of an operation object, and we can use the mocks in our tests of multiply.evaluate. If multiply.evaluate hadn't called the evaluate methods of the mocks, or if it had called one of them more than once, our mocker.verify call would have alerted us to the problem. This ability to describe not just what should be called but how often each thing should be called is a very useful tool that makes our descriptions of what we expect much more complete. When multiply.evaluate calls the evaluate methods of the mocks, the values that get returned are the ones that we specified, so we know exactly what multiply.evaluate ought to do. We can test it thoroughly, and we can do it without involving any of the other units of our code.
Try changing how multiply.evaluate works and see what mocker.verify says about it. Mocking functions Normal objects (that is to say, objects with methods and attributes created by instantiating a class) aren't the only things you can make mocks of. Functions are another kind of object that can be mocked, and it turns out to be pretty easy. During your demonstration, if you want a mock object to represent a function, just call it. The mock object will recognize that you want it to behave like a function, and it will make a note of what parameters you passed it, so that it can compare them against what gets passed to it during the test. For example, the following code creates a mock called func, which pretends to be a function that, when called once with the parameters 56 and hello, returns the number 11. The second part of the example uses the mock in a very simple test: >>> from mocker import Mocker >>> mocker = Mocker() >>> func = mocker.mock() >>> func(56, "hello") # doctest: +ELLIPSIS <mocker.Mock object at ...> >>> mocker.result(11) >>> mocker.replay() >>> func(56, "hello") 11 >>> mocker.restore() >>> mocker.verify() Mocking containers Containers are another category of somewhat special objects that can be mocked. Like functions, containers can be mocked by simply using a mock object as if it were a container during your example. Mock objects are able to understand examples that involve the following container operations: looking up a member, setting a member, deleting a member, finding the length, and getting an iterator over the members. Depending on the version of Mocker, membership testing via the in operator may also be available. In the following example, all of the above capabilities are demonstrated, but the in tests are disabled for compatibility with versions of Mocker that don't support them. Keep in mind that even though, after we call replay, the object called container looks like an actual container, it's not. It's just responding to stimuli we told it to expect, in the way we told it to respond. That's why, when our test asks for an iterator, it returns None instead. That's what we told it to do, and that's all it knows. >>> from mocker import Mocker >>> mocker = Mocker() >>> container = mocker.mock() >>> container['hi'] = 18 >>> container['hi'] # doctest: +ELLIPSIS <mocker.Mock object at ...> >>> mocker.result(18) >>> len(container) 0 >>> mocker.result(1) >>> 'hi' in container # doctest: +SKIP True >>> mocker.result(True) >>> iter(container) # doctest: +ELLIPSIS <...> >>> mocker.result(None) >>> del container['hi'] >>> mocker.result(None) >>> mocker.replay() >>> container['hi'] = 18 >>> container['hi'] 18 >>> len(container) 1 >>> 'hi' in container # doctest: +SKIP True >>> for key in container: ... print key Traceback (most recent call last): TypeError: iter() returned non-iterator of type 'NoneType' >>> del container['hi'] >>> mocker.restore() >>> mocker.verify() Something to notice in the above example is that during the initial phase, a few of the demonstrations (for example, the call to len) did not return a mocker.Mock object, as we might have expected. For some operations, Python enforces that the result is of a particular type (for example, container lengths have to be integers), which forces Mocker to break its normal pattern. Instead of returning a generic mock object, it returns an object of the correct type, although the value of the returned object is meaningless. 
Fortunately, this only applies during the initial phase, when you're showing Mocker what to expect, and only in a few cases, so it's usually not a big deal. There are times when the returned mock objects are needed, though, so it's worth knowing about the exceptions.