Subscribe: IBM developerWorks : Information Mgmt : Tutorials
http://www.ibm.com/developerworks/views/db2/rss/libraryview.jsp?type_by=Tutorials

IBM developerWorks : Information management : Tutorials



The latest content from IBM developerWorks



Published: 30 Mar 2017 14:32:25 +0000

Copyright: Copyright 2004 IBM Corporation.
 



Dropped object recovery: A DBA's nightmare

16 Jun 2016 04:00:00 +0000

Dropped objects in a DB2 for z/OS environment can be a database administrator's (DBA) nightmare. Not only is the data lost, but traditional recovery methods do not work in this situation. This article examines the dropped object dilemma and gives you four steps you can take to recover lost data and objects.



Apply middleware maintenance to patterns and instances in IBM PureApplication System

26 Apr 2016 04:00:00 +0000

Learn how to apply middleware maintenance on IBM PureApplication System by using IBM Installation Manager. In this video, you go through the contents and structure of the IBM Installation Manager Repository. Then, you learn how to apply emergency fix packs and content from the IBM Installation Manager repository to patterns and deployed pattern instances.



Using IBM Database Add-ins for Visual Studio 2013 in DB2 Cancun (10.5 Fix Pack 4)

16 Jul 2015 04:00:00 +0000

This tutorial explains the key new capabilities in IBM Database Add-ins for Visual Studio 2013 available with the DB2 10.5 Fix Pack 4. The authors explain support of the Microsoft Visual Studio 2013 feature set with IBM data servers (DB2 for z/OS; DB2 for i; DB2 for Linux, UNIX, and Windows; and Informix).



Data integration and analytics as a service, Part 1: DataWorks

10 Jul 2015 04:00:00 +0000

Most data integration specialists find that data loading and migration from a source to target are usually time-consuming and tedious tasks to perform. Now with the IBM Bluemix DataWorks service, you can load and migrate data from different sources to different targets easily. IBM DataWorks service, which includes DataWorks APIs and DataWorks Forge, allows developers to load, cleanse and profile data, in addition to migrating to different targets seamlessly. DataWorks Forge is primarily for knowledge workers and helps them to select data, visualize, and prepare it for use after enriching and improving its quality. This tutorial is Part 1 of a series covering data integration and analytics as a service.



Deploying DB2 pureScale Feature 10.5.0.3 on AIX with RDMA over Converged Ethernet

12 Mar 2015 04:00:00 +0000

This tutorial is intended to show users how they can deploy DB2 pureScale on AIX using Remote Direct Memory Access (RDMA) over converged Ethernet. The step-by-step guide will provide details of a sample deployment that could be replicated to allow for other successful pureScale deployments.



Improve performance of product search in InfoSphere MDM Collaborative Edition

05 Mar 2015 05:00:00 +0000

This tutorial explains how to leverage the XML support in InfoSphere MDM Collaborative Edition to improve search performance for a customer solution. We compare the various search options in the product and demonstrate how to index the XML data of MDM entries and optimize search queries that retrieve MDM entries using XQuery. Sample application source code is included.



Connect your apps to DB2 with high-security Kerberos

02 Mar 2015 05:00:00 +0000

This tutorial is a primer to help programmers using IBM Data Server Drivers get applications quickly running in a Kerberos environment. We will be setting up a simple Kerberos environment on Windows, configuring DB2 to use Kerberos authentication, and enabling the client drivers to securely authenticate using Kerberos.
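As a rough illustration of the configuration the tutorial covers, the sketch below assembles the kind of DB2 CLI/ibm_db connection string that requests Kerberos authentication instead of a user ID and password. The host, port, and database values are placeholders, and this is a minimal sketch rather than the tutorial's exact setup:

```python
# Sketch: building a DB2 CLI connection string that requests Kerberos
# authentication. The DSN values (host, port, database) are placeholders;
# AUTHENTICATION=KERBEROS is the CLI keyword that replaces UID/PWD with
# ticket-based authentication.

def kerberos_conn_str(database, hostname, port):
    """Assemble a keyword=value connection string for ibm_db/CLI."""
    parts = {
        "DATABASE": database,
        "HOSTNAME": hostname,
        "PORT": str(port),
        "PROTOCOL": "TCPIP",
        "AUTHENTICATION": "KERBEROS",  # no UID/PWD: Kerberos ticket is used
    }
    return ";".join(f"{k}={v}" for k, v in parts.items()) + ";"

conn_str = kerberos_conn_str("SAMPLE", "db2host.example.com", 50000)
print(conn_str)
```

In a real application this string would be passed to `ibm_db.connect` after the client machine has obtained a Kerberos ticket from the KDC.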



Leverage DB2 Connect for insert operations in existing C/C++ IBM Data Server applications

20 Feb 2015 05:00:00 +0000

This tutorial explains key best practices for developing C/C++ applications against the IBM data servers (DB2 for z/OS; DB2 for i; DB2 for Linux, UNIX, and Windows; and Informix). It provides details for leveraging several DB2 Connect features that pave the way for better performance and align with best-practice recommendations. You can use this information when developing new or enhancing existing C/C++ applications targeting IBM data servers.



Optimizing cloud applications with DB2 stored procedures

12 Feb 2015 05:00:00 +0000

This tutorial describes the IBM DB2 stored procedure framework, methods to monitor stored procedure performance, and methods to optimize stored procedure performance. DB2 provides a routine monitoring framework that helps pinpoint the statements or parts of the procedure code that can be tuned for better performance. The tutorial also describes good practices for writing DB2 SQL/PL and Oracle PL/SQL procedures and a simple way of migrating Oracle PL/SQL procedures to DB2.



Setting up your DB2 subsystem for query acceleration with DB2 Analytics Accelerator for z/OS

05 Feb 2015 05:00:00 +0000

Adding IBM DB2 Analytics Accelerator for z/OS to DB2 for z/OS environments has enabled companies in a variety of industries, from major banks and retailers to IT services and healthcare providers, to significantly improve query processing and increase analytics capabilities. Efficient and cost-effective, DB2 Analytics Accelerator can process certain types of eligible queries, especially business intelligence queries, faster than DB2.



Data purge algorithm: Efficiently delete terabytes of data from DB2 for Linux, UNIX, and Windows

29 Jan 2015 05:00:00 +0000

Big data introduces data storage and system performance challenges. Keeping your growing tables small and efficient improves system performance as the smaller tables and indices are accessed faster; all other things being equal, a small database performs better than a large one. While traditional data purge techniques work well for smaller databases, they fail as the database size scales up into a few terabytes. This tutorial will discuss an algorithm to efficiently delete terabytes of data from the DB2 database.
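The core idea behind such purge algorithms is deleting in small commit-sized batches rather than one huge DELETE, so locks and log space stay bounded. The sketch below simulates that pattern in plain Python (the table is a list of rows and the batch size is illustrative); it is not the tutorial's actual algorithm:

```python
# Minimal sketch of batched purging: instead of one massive DELETE,
# remove qualifying rows in small chunks until none remain. Each chunk
# stands in for "DELETE ... FETCH FIRST n ROWS ONLY" followed by COMMIT.

def purge_in_batches(rows, cutoff_ts, batch_size=1000):
    """Delete rows older than cutoff_ts, at most batch_size per pass."""
    deleted = 0
    while True:
        batch = [r for r in rows if r[1] < cutoff_ts][:batch_size]
        if not batch:
            break                    # nothing left to purge
        for r in batch:
            rows.remove(r)           # simulated row deletion
        deleted += len(batch)        # a real job would COMMIT here
    return deleted

table = [(i, i % 10) for i in range(5000)]   # timestamps cycle 0..9
n = purge_in_batches(table, cutoff_ts=5, batch_size=500)
print(n, len(table))                          # 2500 purged, 2500 remain
```

Keeping each pass small is what lets a real DB2 purge job run alongside OLTP traffic without exhausting transaction log space.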



IBM Datacap 9.0, 9.0.1, and 9.1.0 DDK: Customizing ruleset configuration panels for FastDoc and Datacap Studio

28 Jan 2015 05:00:00 +0000

IBM Datacap provides ruleset configuration panels, which are used at application design time in FastDoc and Datacap Studio, allowing easy ruleset configuration by providing a UI that prompts the user for configuration settings and then creates the appropriate ruleset XML. Additional custom ruleset panels can be created using the provided Visual Studio C# template.



Use a SQL interface to handle JSON data in DB2 11 for z/OS

22 Jan 2015 05:00:00 +0000

This tutorial focuses on a SQL interface recently introduced in DB2 11 for z/OS that allows extraction and retrieval of JSON data from BSON objects and conversion from JSON to BSON. With this new feature, users can manage JSON data without relying on DB2 NoSQL JSON APIs. Instead, SQL interfaces can be used for JSON manipulation. Learn about the setup/configuration and get illustrations for common JSON usage inside DB2 11 for z/OS. Hints and tips are provided to improve performance and prevent potential pitfalls.
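To give a flavor of the interface, the snippet below builds the shape of a query using the DB2 11 for z/OS JSON_VAL scalar function to extract fields from a BSON column. The table and column names (JSON_ORDERS, DATA) and the JSON paths are hypothetical examples, not taken from the tutorial:

```python
# Illustrative only: the shape of a DB2 11 for z/OS query that uses
# JSON_VAL to pull typed values out of a BSON document column.
# 's:32' requests a VARCHAR(32) result; 'i' requests an integer.
# Table/column names here are made up for the example.

query = (
    "SELECT JSON_VAL(DATA, 'customer.name', 's:32') "
    "FROM JSON_ORDERS "
    "WHERE JSON_VAL(DATA, 'customer.id', 'i') = 1001"
)
print(query)
```

The point of the feature is exactly this: the extraction happens in ordinary SQL, with no call to the DB2 NoSQL JSON APIs.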



Add and detach data partitions from DB2 partitioned tables using InfoSphere DataStage

15 Jan 2015 05:00:00 +0000

Table partitioning offers benefits such as roll-in (adding data to a table), roll-out (detaching data from a table), and improved query performance, particularly in data warehouse and decision support system environments. These activities are usually performed manually by DBAs, but in a real-time data warehouse with a continuous flow of data via ETL tools, frequent manual intervention is undesirable and can disrupt the running ETL process. This article discusses how an ETL process can add empty partitions to a partitioned table and remove unwanted partitions from it without any manual or DBA intervention.
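The roll-in/roll-out mechanics can be pictured with a toy model: partitions are named buckets, "attach" adds an empty bucket, and "detach" splits one off into a stand-alone table, mirroring what ALTER TABLE ... ADD PARTITION and ALTER TABLE ... DETACH PARTITION do in DB2 LUW. The partition and row names below are invented for illustration:

```python
# Toy model of partition roll-in/roll-out. A real DB2 statement pair
# would be:
#   ALTER TABLE orders ADD PARTITION p2015q1 STARTING ... ENDING ...
#   ALTER TABLE orders DETACH PARTITION p2014q4 INTO TABLE orders_2014q4

partitions = {"p2014q4": [("ord1",), ("ord2",)]}

def add_partition(parts, name):
    parts[name] = []                  # roll-in: new partition starts empty

def detach_partition(parts, name):
    return name, parts.pop(name)      # roll-out: data leaves as its own table

add_partition(partitions, "p2015q1")
detached = detach_partition(partitions, "p2014q4")
print(sorted(partitions), detached[0], len(detached[1]))
```

An ETL job that issues these statements on a schedule is exactly the hands-off workflow the article describes.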



Unit test SQL PL routines by using db2unit framework in DB2 LUW

09 Jan 2015 05:00:00 +0000

SQL PL routines stored in DB2 databases contain business logic that can be used by all applications that access the database. However, these routines are not always completely tested, and until now no standard procedure existed for running automated tests against them. With the advent of test-driven development from the eXtreme Programming paradigm, it is important to write test cases that check all possible conditions; doing so even before starting development ensures the quality of the software. db2unit is a framework that helps you follow these guidelines, write better code, and create more reliable applications. Learn how to use this innovative framework to automate unit tests for your SQL PL routines.



Establish an information governance policy framework in InfoSphere Information Governance Catalog

18 Dec 2014 05:00:00 +0000

With the substantial growth in data volume, velocity, and variety comes a corresponding need to govern and manage the risk, quality, and cost of that data and provide higher confidence for its use. This is the domain of information governance, but it is a domain that many people struggle with in how to get started. This article provides a starting framework for information governance built around IBM InfoSphere Information Governance Catalog.



Build a DB2 CLI console to manage SQLDB databases

11 Dec 2014 05:00:00 +0000

Manage your SQLDB databases with ease, using an application you can quickly build and deploy on the IBM cloud platform, Bluemix.



Build a simple catalog management application for e-commerce

25 Nov 2014 05:00:00 +0000

In this tutorial, we demonstrate a catalog management system for e-commerce solutions through an application built on top of InfoSphere MDM Collaborative Edition and WebSphere Commerce Enterprise Edition. The application provides a simple solution for managing catalog data through a web-based UI. It models the catalog data for e-commerce products in MDM and provides a collaborative environment for authoring catalog entries, including products, SKUs, and bundles and kits, which can be organized and filtered in hierarchical catalog categories. This tutorial guides readers through the process of developing their own solutions with the new features released in the Advanced Catalog Management Asset in InfoSphere MDM.



Configure and monitor SAP applications with the OMEGAMON DB2 Performance Expert Extended Insight feature

13 Nov 2014 05:00:00 +0000

Learn the details of installing and configuring the IBM Tivoli OMEGAMON for DB2 Performance Expert on z/OS Extended Insight feature in an SAP environment running on DB2 for z/OS. This tutorial also includes troubleshooting advice.



Protect sensitive Hadoop data using InfoSphere BigInsights Big SQL and InfoSphere Guardium

10 Nov 2014 05:00:00 +0000

Major advantages of using Big SQL, the SQL interface to Hadoop data within InfoSphere BigInsights, are its enterprise-ready capability for speed, functionality, and security. This tutorial provides a brief overview of the built-in security capabilities of Big SQL and then goes into greater depth to highlight the integration with InfoSphere Guardium, which provides automated compliance reporting, real-time alerting, dynamic data masking, and much more.



Increase throughput with z/OS Language Environment heap storage tuning method

23 Oct 2014 04:00:00 +0000

The z/OS Language Environment (LE) component provides a common runtime environment for the IBM versions of certain high-level languages. LE provides runtime options that can be customized according to a program's behavior to achieve better execution performance. This paper puts forward an LE heap storage tuning method for IBM's InfoSphere Data Replication for DB2 for z/OS (Q Replication). The tuning reduces contention among concurrent heap storage allocation requests from the multiple threads of the Q Capture and Q Apply programs of Q Replication for z/OS, while keeping overall heap storage allocation to a minimum. After applying the heap tuning techniques outlined in this paper, a notable 13% throughput improvement was achieved for OLTP-type workloads, and a CPU reduction was observed for all workload types.



InfoSphere Guardium and the Amazon cloud, Part 2: Secure storage on the cloud for backup and restore

23 Oct 2014 04:00:00 +0000

The growing number of relational databases on the cloud accentuates the need for data protection and auditing. InfoSphere Guardium offers real-time database security and monitoring, fine-grain database auditing, automated compliance reporting, data-level access control, database vulnerability management, and auto-discovery of sensitive data in the cloud. With the Amazon Relational Database Service (RDS), you can create and use your own database instances in the cloud and build your own applications around them. This two-part series explores how to use InfoSphere Guardium to protect database information in the cloud. Part 1 describes how to use InfoSphere Guardium's discovery and vulnerability assessment with Amazon RDS instances. This tutorial covers how InfoSphere Guardium uses Amazon S3 for backup and restore.



Use industry templates for advanced case management, Part 1: Introducing the Credit Card Dispute Management sample solution template for IBM Case Manager

16 Oct 2014 04:00:00 +0000

IBM Case Manager provides the platform and tools for a business analyst to define and implement a new generation of case management solutions. To accelerate the development of solutions in particular industries, IBM Case Manager supports the notion of a solution template, which is a collection of case management assets that can be customized and extended to build a complete solution. To illustrate the value of solution templates and the features of IBM Case Manager, IBM has provided two sample solution templates that can be used as learning tools for users new to the platform. This tutorial introduces one of those templates: Credit Card Dispute Management from the financial services industry. This sample template can serve as a foundation for clients who want to build a similar solution. The template can also serve as a learning tool and reference for clients to build other solutions in other industries.



Using temporal tables in DB2 10 for z/OS and DB2 11 for z/OS

16 Oct 2014 04:00:00 +0000

Temporal tables were introduced in IBM DB2 10 for z/OS and enhanced in V11. If you have to maintain historical versions of data over several years, temporal tables can be helpful for period-based data. In this tutorial, explore how your applications can use temporal tables to manage different versions of data, simplify service logic, and provide information for auditing. Learn about when and how to use three types of temporal tables to manage period-based data.
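What a system-period temporal table does under the covers can be sketched in a few lines: every update moves the old row version, stamped with its validity period, into a history store, so point-in-time ("AS OF") queries remain possible. This is a simplified model with invented names, not DB2's actual implementation:

```python
# Sketch of system-period versioning: on each update the superseded row
# version is archived with its [from, to) validity period, emulating a
# DB2 history table behind "FOR SYSTEM_TIME AS OF" queries.

import itertools

clock = itertools.count(1)            # stand-in for transaction timestamps
current, history = {}, []

def upsert(key, value):
    ts = next(clock)
    if key in current:
        old_val, old_from = current[key]
        history.append((key, old_val, old_from, ts))   # close old period
    current[key] = (value, ts)

def as_of(key, ts):
    """Return the value of key as it stood at timestamp ts."""
    for k, v, frm, to in history:
        if k == key and frm <= ts < to:
            return v
    val, frm = current.get(key, (None, None))
    return val if frm is not None and frm <= ts else None

upsert("acct42", 100)   # version valid from ts=1
upsert("acct42", 250)   # old version archived with period [1, 2)
print(as_of("acct42", 1), as_of("acct42", 2))
```

In DB2, the upkeep of the history table is automatic once versioning is enabled, which is what lets applications drop their hand-rolled audit logic.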



Use industry templates for advanced case management, Part 2: Introducing the Auto Claims Management sample solution template for IBM Case Manager

16 Oct 2014 04:00:00 +0000

IBM Case Manager provides the platform and tools for business analysts to define and implement a new generation of case management solutions. To accelerate the development of solutions in particular industries, IBM Case Manager supports the notion of a solution template: a collection of case management assets that can be customized and extended to build a complete solution. To help illustrate the value of solution templates and the abilities of IBM Case Manager, IBM provides two sample solution templates that can be used as learning tools for new users of the platform. This tutorial introduces one of those templates: Auto Claims Management, from the insurance services industry. Gain an understanding of what a template is, and learn about the assets delivered in this sample template and how they were built. (This tutorial includes the code for the sample template as well as instructions on how to deploy it.)



Using the MDM Application Toolkit to build MDM-centric business processes, Part 5: Security

09 Oct 2014 04:00:00 +0000

This is the fifth article in a series that describes how to create process applications for master data by using IBM Business Process Manager (BPM). This series refers to the InfoSphere Master Data Management (MDM) Application Toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This tutorial guides you through several security issues when creating MDM processes using the Application Toolkit. Learn about managing security issues when connecting to an MDM server, enabling encrypted flows between your process and MDM, certificate management, and restricting the REST service to HTTPS.



DB2 Connect: Get the most from new features in DB2 Cancun 10.5.0.4

02 Oct 2014 04:00:00 +0000

DB2 Connect in DB2 Cancun Release 10.5.0.4 includes many rich features. Get a high-level overview of the key features across the various client drivers for DB2, including Java driver and non-Java drivers (CLI and .NET). Learn about the practical application of the new DB2 Connect features that provide big returns. Key features can help alleviate several business problems. The information in this tutorial will be useful when deciding on release upgrades.



Monitor your database without logging

25 Sep 2014 04:00:00 +0000

Jose Bravo demonstrates how to set up the integration between IBM Security QRadar SIEM and IBM Guardium to create an efficient, low-impact database monitoring solution. He then walks through a typical use case scenario where an unauthorized transaction on a database is detected and raised as a security offense in the QRadar SIEM.



Improve performance of mixed OLTAP workloads with DB2 shadow tables

25 Sep 2014 04:00:00 +0000

DB2 with BLU Acceleration introduces an innovative feature that makes analytics faster and simpler. Learn how shadow tables use BLU Acceleration technologies to improve the performance of analytic queries within your OLTP environment, and experience the power of complex reporting on real-time data in a single database. The goal of this article is to introduce you to the power of shadow tables and walk you through the simple steps of setting up your environment to use them.
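The idea behind shadow tables can be illustrated with a toy model: the same data is kept row-wise for transactional point queries and column-wise for analytic scans, and each access pattern is served from the layout that suits it. The data and names below are invented; in DB2 the column-organized copy is maintained automatically by replication and chosen by the optimizer:

```python
# Toy illustration of the shadow-table idea: one row-organized store for
# OLTP point lookups, one column-organized copy for analytic aggregates.

rows = [("a", 10), ("b", 20), ("c", 30)]           # row store (OLTP)
shadow = {"k": [r[0] for r in rows],               # column store (analytics)
          "v": [r[1] for r in rows]}

def lookup(key):
    """Point query: served from the row store."""
    return next(v for k, v in rows if k == key)

def total():
    """Aggregate query: served by scanning one column of the shadow copy."""
    return sum(shadow["v"])

print(lookup("b"), total())
```

Column scans touch only the column they need, which is why routing aggregates to the column-organized copy pays off as tables grow.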



Configure multiple HADR databases in a DB2 instance for automated failover using Tivoli System Automation for Multiplatforms

22 Sep 2014 04:00:00 +0000

Learn how to enable automated failover support using IBM Tivoli System Automation for Multiplatforms for multiple databases configured for High Availability Disaster Recovery in a single DB2 instance. Walk through scenarios that use db2haicu in interactive mode and with an XML file as input. The example setup is for DB2 Enterprise Server Edition environments in Linux or AIX with DB2 9.5 and higher.



Configure a complete query and workload tuning cycle with InfoSphere Optim Performance Manager V5.3.1

16 Sep 2014 04:00:00 +0000

With the InfoSphere Optim Performance Manager V5.3.1 web console, you can configure your monitored databases for a complete tuning cycle for single queries or workloads for DB2 for Linux, UNIX and Windows, and DB2 for z/OS data servers. You do not have to install Data Studio for single-query or workload tuning. Examples in this tutorial walk you through single-query tuning and workload tuning enhancements.



Developing behavior extensions for InfoSphere MDM

12 Sep 2014 04:00:00 +0000

One of the most fundamental extension mechanisms of InfoSphere Master Data Management (MDM) allows for the modification of service behavior. These extensions are commonly referred to as behavior extensions, and the incredible flexibility they provide allows for organizations to implement their own "secret sauce" to the 700+ business services provided out of the box with InfoSphere MDM. The purpose of this tutorial is to introduce you to behavior extensions and guide you through the implementation, testing, packaging, and deployment of these extensions. You will be introduced to the Open Service Gateway initiative (OSGi)-based extension approach in InfoSphere MDM Workbench Version 11.



Enhanced development with OSGi, composite bundles, and InfoSphere Master Data Management operational server 11.x

08 Sep 2014 04:00:00 +0000

This tutorial walks through best practices for optimal development with the InfoSphere Master Data Management (MDM) operational server. Explore common OSGi patterns, how to best deploy MDM composite bundle (CBA) extensions, and how to troubleshoot failures.



Develop an IoT application on Bluemix with Arduino and Rails

28 Aug 2014 04:00:00 +0000

In this tutorial, we show you how to develop an application that uses technologies applicable to the Internet of Things (IoT). Our application collects data from an accelerometer, stores it on a web server, then displays the result in real time on a web page.



Best practices for IBM InfoSphere Blueprint Director, Part 3: Sharing Information Architectures through InfoSphere Blueprint Director

06 Jun 2013 04:00:00 +0000

This article provides best practices on publishing information architecture blueprints using IBM InfoSphere Blueprint Director. Publishing architecture blueprints enables sharing of the most current solution architecture with all team members allowing everyone to experience the same project vision.



IBM Accelerator for Machine Data Analytics, Part 5: Speeding up analysis of structured data together with unstructured data

28 May 2013 04:00:00 +0000

Previously in this series, you created a searchable repository of semi-structured and unstructured data -- namely, Apache web access logs, WebSphere logs, Oracle logs, and email data. In this tutorial, you will enrich the repository with structured data exported from a customer database. Specifically, you will search across structured customer information and semi-structured and unstructured logs and emails, and perform analysis using BigSheets to identify which customers who emailed Sample Outdoors Company during the July 14th outage were more loyal than others.



IBM Accelerator for Machine Data Analytics, Part 3: Speeding up machine data searching

31 Jan 2013 05:00:00 +0000

Machine logs from diverse sources are generated in an enterprise in voluminous quantities. IBM Accelerator for Machine Data Analytics simplifies the implementation work required to accelerate the analysis of semi-structured, unstructured, or structured textual data.



DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 prep, Part 2: Physical design

31 Jan 2013 05:00:00 +0000

This tutorial discusses the creation of IBM DB2 databases, as well as various methods used for placing and storing objects within a database. The focus is on partitioning, compression, and XML, which are all important performance and application development concepts you need to store and access data quickly and efficiently. This is the second in a series of eight tutorials you can use to help you prepare for the DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611. The material in this tutorial primarily covers the objectives in Section 2 of the exam.



Customize the InfoSphere Master Data Management classic party search capability

24 Jan 2013 05:00:00 +0000

In this tutorial, use the data extension and pre-written SQL capabilities to create a custom search object and enable the use of the new party attributes as part of a classic party search service. You will use a scenario in which the search criteria to be added, as part of the pluggable SQL, are not available in the out-of-the box party search object. Therefore, additional Java development effort is required.



Resource description framework application development in DB2 10 for Linux, UNIX, and Windows, Part 1: RDF store creation and maintenance

23 Jan 2013 05:00:00 +0000

The Resource Description Framework (RDF) is a family of W3C specification standards that enables the exchange of data and metadata. Using DB2 10 for Linux, UNIX, and Windows Enterprise Server Edition, applications can store and query RDF data. This tutorial walks you through the steps of building and maintaining a sample RDF application. During this process you will learn hands-on how to use DB2 software in conjunction with RDF technology.
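The data model an RDF store holds is simple to demonstrate: facts are (subject, predicate, object) triples, and queries are pattern matches over them. The minimal in-memory store below uses invented example IRIs and is only a sketch of the model, not of DB2's storage scheme:

```python
# A minimal in-memory triple store illustrating the RDF data model:
# every fact is a (subject, predicate, object) triple, and a query is a
# pattern match where None acts as a wildcard (like a SPARQL variable).

triples = [
    ("ex:alice", "ex:knows",   "ex:bob"),
    ("ex:alice", "ex:worksAt", "ex:ibm"),
    ("ex:bob",   "ex:worksAt", "ex:ibm"),
]

def match(s=None, p=None, o=None):
    """Return all triples matching the (s, p, o) pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who works at ex:ibm?  (cf. SPARQL: SELECT ?s WHERE { ?s ex:worksAt ex:ibm })
print(sorted(t[0] for t in match(p="ex:worksAt", o="ex:ibm")))
```

A DB2 RDF store persists the same triple model in relational tables and answers SPARQL queries against it.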



IBM Accelerator for Machine Data Analytics, Part 2: Speeding up analysis of new log types

17 Jan 2013 05:00:00 +0000

Machine logs from diverse sources are generated in an enterprise in voluminous quantities. IBM Accelerator for Machine Data Analytics simplifies the implementation work required to accelerate the analysis of semi-structured, unstructured, or structured textual data.



DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 prep, Part 7: Security

06 Dec 2012 05:00:00 +0000

This tutorial introduces the concepts of authentication, authorities, privileges, audit facility, trusted context, RCAC, and LBAC as they relate to DB2 10. It is the seventh in a series of tutorials designed to help you prepare for the DB2 10.1 for Linux, UNIX, and Windows Database Administration (exam 611). You should have basic knowledge of database concepts and operating system security.



System Administration Certification exam 919 for Informix 11.70 prep, Part 6: Informix Data Warehousing

06 Dec 2012 05:00:00 +0000

In this tutorial, you'll learn about IBM Informix Data warehousing concepts and the tools that you can use to create data warehouses and optimize your data warehouse queries. This tutorial prepares you for Part 7 of the System Administration Certification exam 919 for Informix v11.70.



Policy monitoring reports security setup with InfoSphere Master Data Management and Tivoli Directory Server

29 Nov 2012 05:00:00 +0000

The Policy Monitoring component is introduced in IBM's InfoSphere Master Data Management (MDM) v10.1 release. Using IBM Cognos Business Intelligence reporting tools, Policy Monitoring enables organizations to report on data quality by using aggregated metrics and to establish policies for compliance with data quality thresholds. This tutorial provides detailed steps to set up a basic security model in IBM Cognos Business Intelligence for providing authentication and authorization for Policy Monitoring reports.



DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 prep, Part 4: Monitoring DB2 activity

15 Nov 2012 05:00:00 +0000

This tutorial introduces you to the set of monitoring tools that are available with DB2 10.1 and shows you how each is used to monitor how well (or how poorly) your database system is operating. This is the fourth tutorial in a series of eight that you can use to help prepare for Part 4 of the DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611.



Use IBM InfoSphere Information Server to transform legacy data into information services

12 Nov 2012 05:00:00 +0000

Learn how to create and deploy information services to access legacy databases without writing any code. The generated web services are created using IBM Information Server components including InfoSphere DataStage, InfoSphere Federation Server, InfoSphere Information Services Director, and WebSphere Transformation Extender for DataStage. In this example, the information services are delivered using a standard government XML model (GJXDM).



Use IBM InfoSphere Optim Query Workload Tuner 3.1.1 to tune statements in DB2 for Linux, UNIX, and Windows, and DB2 for z/OS that reference session tables

08 Nov 2012 05:00:00 +0000

IBM InfoSphere Optim Query Workload Tuner (OQWT) 3.1.1 can tune statements for IBM DB2 for Linux, UNIX, and Windows, and IBM DB2 for z/OS. This document describes how to use OQWT to tune a statement that accesses one or more session tables. Two methods are presented for setting up the database environment for the session table so that OQWT 3.1.1 can tune statements that use the table. Examples are provided of the script required to set up the environment, including example snapshots of the output and functionality of the applicable OQWT tuning features.



DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 prep, Part 3: Business rules implementation

08 Nov 2012 05:00:00 +0000

This tutorial is designed to introduce you to the skills you must possess to implement business rules in a DB2 database environment. This tutorial will also help you prepare for Section 3 of the DB2 10.1 for Linux, UNIX, and Windows Database Administration certification exam (Exam 611).



DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 prep, Part 8: Connectivity and networking

25 Oct 2012 04:00:00 +0000

This tutorial walks you through the process of configuring communications and cataloging databases, remote servers (nodes), and Database Connection Services (DCS) databases. It also introduces DB2 Discovery, managing connections to System z and System i host databases, and the Lightweight Directory Access Protocol (LDAP). This tutorial prepares you for Part 8 of the DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611.



DB2 10.1 Fundamentals certification exam 610 prep, Part 5: Working with tables, views, and indexes

25 Oct 2012 04:00:00 +0000

This tutorial discusses IBM DB2 10.1 support for data types, tables, views, triggers, constraints and indexes. It explains the features of these objects, how to create and manipulate them using Structured Query Language (SQL), and how they can be used in an application. This tutorial is the fifth in a series that you can use to help prepare for the DB2 10.1 Fundamentals certification exam 610.



DB2 10.1 fundamentals certification exam 610 prep, Part 3: Working with databases and database objects

18 Oct 2012 04:00:00 +0000

This tutorial shows you the basic steps and requirements to create and connect to a database in DB2 10.1. Also, this tutorial introduces you to the objects that make up a DB2 database as well as how to create and manipulate them. This tutorial prepares you for Part 3 of the DB2 10.1 fundamentals certification exam 610.



DB2 10.1 fundamentals certification exam 610 prep, Part 1: Planning

18 Oct 2012 04:00:00 +0000

This tutorial introduces you to the basics of the DB2 10.1 product editions, functionalities and tools, along with underlying concepts that describe different types of data applications such as OLTP, data warehousing / OLAP, non-relational concepts and more. It will briefly introduce you to many of the concepts you’ll see in the other tutorials in this series, helping you to prepare for the DB2 10.1 Fundamentals certification test 610.



DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 prep, Part 5: DB2 utilities

11 Oct 2012 04:00:00 +0000

Learn skills that help you to properly manage your DB2 database servers. This is the fifth in a series of eight tutorials to help you prepare for the DB2 10.1 for Linux, UNIX, and Windows Database Administration (Exam 611).



Resource description framework application development in DB2 10 for Linux, UNIX, and Windows, Part 2: Optimize your RDF data stores in DB2 and provide fine-grained access control

04 Oct 2012 04:00:00 +0000

The Resource Description Framework (RDF) is a family of W3 specification standards that enables the exchange of data and metadata. Using IBM DB2 10 for Linux, UNIX, and Windows Enterprise Server Edition, applications can store and query RDF data. This tutorial looks at the characteristics of RDF data and describes the process for creating optimized stores. In addition, it describes how to provide fine-grained access control to RDF stores using either the DB2 engine or the application. It includes a sample application.
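The core idea behind an RDF store — (subject, predicate, object) triples queried by pattern — can be sketched in a few lines. This is a conceptual illustration only; DB2's actual RDF support uses its own optimized storage and SPARQL query APIs, and all names below are made up for the example:

```python
# Minimal in-memory triple store illustrating RDF pattern matching.
# Conceptual sketch only -- not the DB2 RDF API.
triples = [
    ("emp:alice",  "worksFor",  "dept:sales"),
    ("emp:bob",    "worksFor",  "dept:sales"),
    ("dept:sales", "locatedIn", "city:boston"),
]

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard,
    just like an unbound variable in a SPARQL triple pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who works for the sales department?
print(match(p="worksFor", o="dept:sales"))
# [('emp:alice', 'worksFor', 'dept:sales'), ('emp:bob', 'worksFor', 'dept:sales')]
```

Real stores index triples several ways (by subject, by predicate, by object) so that any pattern shape can be answered efficiently, which is the kind of optimization the tutorial discusses.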



IBM InfoSphere Optim Query Capture and Replay 1.1 for Linux, UNIX, and Windows, Part 1: Introduction to OQCR

20 Sep 2012 04:00:00 +0000

IBM InfoSphere Optim Query Capture and Replay (IOQCR) 1.1 for Linux, UNIX, and Windows enables an organization to create a production-like data application test environment where changes can be tested and tuned before being deployed into production. InfoSphere Optim Query Capture and Replay captures all of the application workload running against a production database and replays it against a test database without the need to replicate the entire application infrastructure. It not only replays both dynamic and static SQL, but also reproduces the number of client connections and their properties, the timing and order of execution, transaction boundaries and isolation levels, and many other critical features of workload execution. The result is a much closer approximation of production workloads and greater confidence that when changes are deployed, they will not disrupt production.



DB2 10.1 fundamentals certification exam 610 prep, Part 6: Data concurrency

30 Aug 2012 04:00:00 +0000

This tutorial is designed to introduce you to the concept of data consistency and to the mechanisms DB2 uses to maintain data consistency in both single- and multi-user database environments. This tutorial will also help you prepare for Section 6 of the DB2 10.1 Fundamentals certification exam (Exam 610).



System Administration Certification exam 919 for Informix 11.70 prep, Part 7: Security

20 Jun 2012 04:00:00 +0000

Data security is always a concern for database administrators. This tutorial helps you understand how to secure your data by preventing unauthorized viewing and altering of data or database objects, including how to use the secure-auditing facility of the database server to monitor database activities. This tutorial prepares you for Part 7 of the System Administration Certification exam 919 for Informix(R) v11.70.



System Administration Certification exam 919 for Informix 11.70 prep, Part 2: Informix space management

10 May 2012 04:00:00 +0000

In this tutorial, you'll learn how to configure and manage storage spaces in an IBM Informix(R) database, which utilities create those storage spaces, and how to use fragmentation and related features to optimize storage in the database. This tutorial prepares you for Part 2 of the System Administration Certification exam 919 for Informix v11.70.



Exploring IMS disaster recovery solutions, Part 4: Coordinated IMS and DB2 solutions

03 May 2012 04:00:00 +0000

Every customer needs a disaster recovery (DR) plan, and the strategy will differ from one customer to the next. For IMS, there are two types of DR solutions: 1) IMS-specific solutions, and 2) storage mirroring. In this tutorial, we explore the IMS-specific DR solutions: some use only the IMS base product, and others use the IBM IMS Tools products. Each DR solution includes a discussion of the key concepts related to it.



Create an ER diagram for an Informix database

26 Apr 2012 04:00:00 +0000

Understanding the structure of a database is important to administrators as well as to developers. An image conveys more detail in seconds than any textual listing can. This tutorial demonstrates how to use a tool that delivers an entity-relationship graphical overview of an existing Informix relational database and the objects it contains.



Exploring IMS disaster recovery solutions, Part 3: IMS Recovery Expert solutions

19 Apr 2012 04:00:00 +0000

Every customer needs a Disaster Recovery (DR) plan. The strategies used differ from one customer to another and they differ in time to recovery and loss of data. For IMS, there are five types of DR solutions: restart, recovery, recovery and restart, coordinated IMS and DB2 restart, and coordinated IMS and DB2 disaster recovery and restart. Here in Part 3, we explore the recovery and the recovery-and-restart solutions provided by the IMS Recovery Expert product.



Exploring IMS disaster recovery solutions, Part 2: IMS Base and IMS Tools recovery solutions

12 Apr 2012 04:00:00 +0000

Every customer needs a Disaster Recovery (DR) plan. The strategies used differ from one customer to another and they differ in time to recovery and loss of data. For IMS, there are five types of DR solutions: restart, recovery, recovery and restart, coordinated IMS and DB2 restart, and coordinated IMS and DB2 disaster recovery and restart. Here in Part 2, we explore the recovery solutions that use only the IMS base functions and some of the functions in the IMS Tools.



System Administration Certification exam 919 for Informix 11.70 prep, Part 5: Informix backup and restore

22 Mar 2012 04:00:00 +0000

In this tutorial, you'll learn about IBM Informix(R) database backup and restore concepts and strategies, and you'll learn about utilities and commands for managing your database backup and restore processes. In addition, learn how to monitor your backups and perform problem determination when necessary. This tutorial prepares you for Part 5 of the System Administration Certification exam 919 for Informix v11.70.



System Administration Certification exam 919 for Informix 11.70 prep, Part 4: Performance tuning

01 Mar 2012 05:00:00 +0000

Tune IBM Informix(R) database server and its different subsystems for optimum performance. After an overview, follow along with examples on how to look at the database server and its subsystems. Learn about important database optimization elements, including checkpoints, recovery, physical logging, logical logging, asynchronous I/O VP, network parameters, disk resources, CPU VP resources, PDQ, memory grant manager, scan threads, index creation, statistics maintenance, and self-tuning. Use this tutorial, the fourth in a series of eight tutorials, to help prepare for Part 4 of the Informix 11.70 exam 919.



Designing an integration landscape with IBM InfoSphere Foundation Tools and Information Server, Part 1: Planning an integration landscape

02 Feb 2012 05:00:00 +0000

This tutorial is an introduction to the use of IBM InfoSphere Blueprint Director, in the context of a project, to depict the target vision (or landscape) for the final solution and to provide guidance for subsequent project tasks. It is the first of a series of tutorials focused on a specific, common information integration scenario: the update of a Data Warehouse-Business Intelligence (DW-BI) information process.



Managing and scheduling database jobs with the Data Studio Web Console

12 Jan 2012 05:00:00 +0000

As the number of databases increases in many organizations, DBAs face a major challenge in automating and scheduling their database operations. The new job management capability in IBM Data Studio provides DBAs with a simple and flexible way to create and manage database jobs and to schedule command scripts to run automatically.



System Administration Certification exam 919 for Informix 11.70 prep, Part 3: System activity monitoring

28 Dec 2011 05:00:00 +0000

In this tutorial, you'll learn about IBM Informix(R) database tools, the utilities to monitor the database, and how to diagnose problems. Learn how to use the system-monitoring interface (SMI) and the SQL administration API. This tutorial prepares you for Part 3 of the System Administration Certification exam 919 for Informix v11.70.



Implement custom query transactions for IBM InfoSphere Master Data Management Server

17 Nov 2011 05:00:00 +0000

Learn how to extend IBM InfoSphere Master Data Management Server by implementing new query transactions using the MDM Server Workbench.



Integrating SPSS Model Scoring in InfoSphere Streams, Part 1: Calling Solution Publisher from an InfoSphere Streams operator

13 Oct 2011 04:00:00 +0000

This tutorial describes how to write and use an InfoSphere Streams operator to execute an IBM SPSS Modeler predictive model in an InfoSphere Streams application using the IBM SPSS Modeler Solution Publisher Runtime Library API.



Using the SQL integration service with WebSphere Lombardi Edition V7.2 and WebSphere Application Server V7

12 Oct 2011 04:00:00 +0000

This tutorial provides steps to help you create a connection to DB2 and manipulate the database by using the Java Naming and Directory Interface (JNDI) in WebSphere Application Server, and then to use that connection in WebSphere Lombardi Edition V7.2. In Lombardi Edition, you learn how to create a human service to support interaction with end users. Moreover, you learn how to design data structures to represent business data and to control the workflow in a business process application.



Integrating SPSS Model Scoring in InfoSphere Streams, Part 2: Using a generic operator

06 Oct 2011 04:00:00 +0000

Part 1 of this series describes how to write and use an InfoSphere Streams operator to execute an IBM SPSS Modeler predictive model in an InfoSphere Streams application using the IBM SPSS Modeler Solution Publisher Runtime library API. Part 2 takes the non-generic operator produced in Part 1 and extends it to be a generic operator capable of being used with any SPSS Modeler stream without any custom C++ coding needed.



Solving problems in the DB2 pureScale cluster services environment

18 Aug 2011 04:00:00 +0000

This tutorial guides DBAs and system administrators in problem determination for DB2 pureScale cluster services. As you deploy IBM DB2 pureScale Feature for DB2 Enterprise Server Edition systems into production, you need to acquire appropriate problem determination skills. This tutorial provides information about gathering diagnostic information when failures occur, and provides additional information to aid in understanding the tightly integrated subcomponents of the DB2 pureScale Feature, such as the Cluster Caching Facility (CF), General Parallel File System (GPFS), Reliable Scalable Cluster Technology (RSCT), and IBM Tivoli Systems Automation for Multiplatforms (Tivoli SA MP).



Integrate the rich Internet application framework ZK with Informix to build real-world applications

18 Aug 2011 04:00:00 +0000

This tutorial presents a real-world example that integrates IBM Informix and ZK, a rich Internet application (RIA) framework. Informix is a flagship IBM RDBMS product, while ZK is a Java-based web application framework supporting Ajax applications. This event-driven framework enables creation of rich user interfaces with minimal knowledge and use of JavaScript. ZK's unique server-centric approach enables synchronization of components and events across the client and server via the core engine.



Creating a custom business activity monitoring user interface using DB2 Alphablox Query Builder

17 Aug 2011 04:00:00 +0000

In this tutorial, you'll learn about the powerful features of the DB2 Alphablox Query Builder, included with WebSphere Business Monitor, to build a custom user interface for monitoring solutions. This approach is useful when you don't want to use Business Space, but need an interface that's easy to customize to your desired look and feel, or when you want to embed the monitoring application into an existing UI.



Standardize your data using InfoSphere QualityStage

11 Aug 2011 04:00:00 +0000

Data standardization is a process that ensures that data conforms to quality rules. This tutorial introduces data standardization concepts and demonstrates how you can achieve standardized data using IBM InfoSphere QualityStage. A reader who is new to QualityStage standardization will get a basic understanding of the process. Readers should have basic knowledge of InfoSphere DataStage job development. This tutorial covers standardization using country identifier, domain pre-processor, domain-specific, and validation rule sets.
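At its simplest, rule-set standardization means tokenizing a value and rewriting each token against a domain dictionary. The tiny sketch below conveys the idea only; QualityStage rule sets are far richer (context-sensitive patterns, domain pre-processing, validation), and the rules here are hypothetical:

```python
import re

# Hypothetical rule set mapping street-address variants to a standard
# form. A real QualityStage rule set is much larger and context-aware.
RULES = {
    "st": "STREET", "st.": "STREET", "street": "STREET",
    "ave": "AVENUE", "ave.": "AVENUE", "avenue": "AVENUE",
    "n": "NORTH", "n.": "NORTH",
}

def standardize(address: str) -> str:
    """Upper-case each token and rewrite it via the rule set
    when a rule applies."""
    tokens = re.split(r"\s+", address.strip())
    return " ".join(RULES.get(tok.lower(), tok.upper()) for tok in tokens)

print(standardize("123 n. Elm St."))   # 123 NORTH ELM STREET
```

The payoff is downstream matching: once "St." and "Street" standardize to the same token, duplicate detection and record linkage become straightforward equality tests.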



Migrating InfoSphere Streams SPADE applications to Streams Processing Language, Part 3: Migrate SPADE user-defined function applications

18 Jul 2011 04:00:00 +0000

The most significant new feature of Version 2.0 of the IBM InfoSphere(R) Streams product is the programming language model transformation from Streams Processing Application Declarative Engine (SPADE) to Streams Processing Language (SPL). Users with SPADE applications from previous versions will need to migrate and port their applications to SPL when upgrading their installations to Version 2.0. This tutorial is Part 3 of a 5-part series that uses actual SPADE samples to demonstrate a series of step-by-step procedures for migrating and porting different types of SPADE application content. Part 3 demonstrates the migration of SPADE user-defined function applications.



Recommended practices for using Cognos with Informix, Part 2: Deploy Informix with IBM Cognos BI Server 10

07 Jul 2011 04:00:00 +0000

Connecting your Informix databases to IBM Cognos Business Intelligence software gives you a way to unleash the power of your data with expanded query, reporting, and analysis capabilities. If you're ready to take that step, this two-part tutorial series gives you the information you need to install, configure, and deploy the necessary components to achieve the best results. Part 1 showed how to get started with using IBM Cognos Express V9 together with IBM Informix V11.5 as a content store and data source. In Part 2, you'll get the same level of detail for deploying Informix with IBM Cognos BI Server V10. The tutorials include recommended practices for each step along the way, based on lessons learned from real-world deployments on the Windows operating system.



Migrating InfoSphere Streams SPADE applications to Streams Processing Language, Part 5: Migrate SPADE user-defined built-in operator (UBOP) applications

16 Jun 2011 04:00:00 +0000

The most significant new feature of Version 2.0 of the IBM InfoSphere(R) Streams product is the programming language model transformation from Streams Processing Application Declarative Engine (SPADE) to Streams Processing Language (SPL). Users with SPADE applications from previous versions will need to migrate and port their applications to SPL when upgrading their installations to Version 2.0. This tutorial is Part 5 of a 5-part series that uses actual SPADE samples to demonstrate a series of step-by-step procedures for migrating and porting different types of SPADE application content. Part 5 demonstrates the migration of SPADE user-defined built-in operator (UBOP) applications.



Migrating InfoSphere Streams SPADE applications to Streams Processing Language, Part 4: Migrate SPADE user-defined operator (UDOP) applications

09 Jun 2011 04:00:00 +0000

The most significant new feature of Version 2.0 of the IBM InfoSphere(R) Streams product is the programming language model transformation from Streams Processing Application Declarative Engine (SPADE) to Streams Processing Language (SPL). Users with SPADE applications from previous versions will need to migrate and port their applications to SPL when upgrading their installations to Version 2.0. This tutorial is Part 4 of a 5-part series that uses actual SPADE samples to demonstrate a series of step-by-step procedures for migrating and porting different types of SPADE application content. Part 4 demonstrates the migration of SPADE user-defined operator (UDOP) applications.



Migrating InfoSphere Streams SPADE applications to Streams Processing Language, Part 2: Migrate SPADE mixed-mode applications

26 May 2011 04:00:00 +0000

The most significant new feature of Version 2.0 of the IBM InfoSphere(R) Streams product is the programming language model transformation from Streams Processing Application Declarative Engine (SPADE) to Streams Processing Language (SPL). Users with SPADE applications from previous versions will need to migrate and port their applications to SPL when upgrading their installations to Version 2.0. This tutorial is Part 2 of a 5-part series that uses actual SPADE samples to demonstrate a series of step-by-step procedures for migrating and porting different types of SPADE application content. Part 2 demonstrates the migration of SPADE mixed-mode applications.



Managing pureQuery-enabled applications efficiently, Part 3: Automate client optimization with WebSphere applications

19 May 2011 04:00:00 +0000

In a customer environment, applications often interact with transactional databases from within an application server. pureQuery client optimization can provide useful diagnostic information as well as increase performance for your web application. In this tutorial, you will learn how to automate the pureQuery client optimization process with Apache Ant script technologies.



Migrating InfoSphere Streams SPADE applications to Streams Processing Language, Part 1: Migrate basic SPADE applications

19 May 2011 04:00:00 +0000

The most significant new feature of Version 2.0 of the IBM InfoSphere(R) Streams product is the programming language model transformation from Streams Processing Application Declarative Engine (SPADE) to Streams Processing Language (SPL). Users with SPADE applications from previous versions will need to migrate and port their applications to SPL when upgrading their installations to Version 2.0. This tutorial is Part 1 of a 5-part series that uses actual SPADE samples to demonstrate a series of step-by-step procedures for migrating and porting different types of SPADE application content. Part 1 demonstrates the migration of basic SPADE applications.



The Informix Detective Game

14 Apr 2011 04:00:00 +0000

Here's a fun way to learn about IBM Informix! Learn or teach the basics of Informix and relational databases with an interactive game called the Informix Detective Game (the game's theme is a crime investigation). The game teaches relational database concepts and shows how technology can be applied to solving real-life problems. The Informix Detective Game is based on the DB2 Detective Game created by Joanna Kubasta and Joanne Moore.



Take a beginner's tour of the Informix virtual table interface with shared libraries

17 Mar 2011 04:00:00 +0000

IBM Informix (R) provides access to external data sources through the virtual table interface (VTI). The VTI provides a set of hooks called purpose functions. As a developer, your task is to create an access method that implements VTI purpose functions and as many additional user-defined routines (UDRs) as necessary to access your external data source. This tutorial shows you how to compile and run your VTI UDR as a shared library.



Migrate a database from MySQL to IBM Informix Innovator-C Edition, Part 2: Step-by-step walk-through of the migration process

17 Feb 2011 05:00:00 +0000

Walk through a migration from MySQL to Informix, step by step. The tutorial provides a conversion methodology and discusses the processes for migrating both database objects and data. It includes a discussion of SQL differences and shows how to migrate tables, views, stored procedures, functions, triggers, and more.
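To give a flavor of the SQL-difference work such a migration involves, here is a toy sketch of column type mapping. The mapping table is a hypothetical subset chosen for illustration, not the tutorial's full conversion methodology:

```python
# Hypothetical subset of a MySQL-to-Informix column type mapping.
TYPE_MAP = {
    "TINYINT":  "SMALLINT",
    "DATETIME": "DATETIME YEAR TO SECOND",
    "DOUBLE":   "FLOAT",
}

def map_type(mysql_type: str) -> str:
    """Map a MySQL column type to an Informix equivalent,
    preserving any length specifier such as VARCHAR(40)."""
    base = mysql_type.split("(")[0].strip().upper()
    mapped = TYPE_MAP.get(base, base)
    if "(" in mysql_type and "(" not in mapped:
        mapped += mysql_type[mysql_type.index("("):]
    return mapped

print(map_type("tinyint"))      # SMALLINT
print(map_type("varchar(40)"))  # VARCHAR(40)
```

Type mapping is only one axis of the conversion; auto-increment columns, stored procedure syntax, and trigger semantics each need their own translation rules, as the tutorial describes.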



Managing pureQuery-enabled applications efficiently, Part 1: Set up an SQL management repository using an Ant script

27 Jan 2011 05:00:00 +0000

IBM Optim Development Studio and the pureQuery Runtime include a command-line utility called ManageRepository that can be used to create, modify, export, import, and delete pureQuery metadata that is stored in the SQL management repository. Setting up an SQL management repository can be challenging using the ManageRepository utility command script. This tutorial shows you how to create and manage an SQL repository using an Ant script. You will also learn how to run the Ant script from within IBM Optim Development Studio.



Develop mapping models with IBM InfoSphere Data Architect

27 Jan 2011 05:00:00 +0000

Designing the mappings for an extract, transform, and load (ETL) process is a critical step in a data warehouse project. Mappings must be easy to modify, capable of version control, easily reported, and easily exported to other formats. This tutorial illustrates how to develop a complete source-to-target mapping model using InfoSphere(TM) Data Architect. You will also learn about the reporting functions that InfoSphere Data Architect provides.



Use the IBM Industry Model Insurance Information Warehouse to define smart and mature data models

23 Dec 2010 05:00:00 +0000

In this tutorial, understand the method for developing data models for data warehouse projects using the IBM Industry Model Insurance Information Warehouse (IIW), which is part of the IBM Industry Models product defined for the insurance domain. The tutorial shows the best approach to developing core data warehouse (CDW) models and data mart (DM) models. It also introduces the recommended data warehousing development method (DWDM) for using the IIW model pattern framework to architect data warehouse solutions for insurance companies.



Monitor DB2 for Linux, UNIX, and Windows databases with Data Studio Health Monitor

16 Dec 2010 05:00:00 +0000

This tutorial introduces the Data Studio Health Monitor tool for DB2 for Linux, UNIX, and Windows databases. It walks you through the steps for monitoring the health of your databases, drilling down into the alert details, and changing the out-of-the-box default thresholds. It also describes the advanced health monitoring features that are available in the Optim Performance Manager 4.1.0.1 release, and the seamless integration in the Data Studio and Optim Database Administrator offerings.



Configuring a data source in WebSphere Lombardi Edition V7.1

13 Oct 2010 04:00:00 +0000

WebSphere Lombardi Edition V7.1 provides connectivity to the database specified during installation. This tutorial shows you how to connect to an additional database by creating a data source in WebSphere Application Server, and then using it in the Lombardi Edition Authoring Environment.



Using DB2 High Availability Disaster Recovery with Tivoli Systems Automation and Reliable Scalable Cluster Technology

30 Sep 2010 04:00:00 +0000

The DB2 High Availability (HA) feature, introduced in DB2 9.5, enables a new level of integration between the data server and cluster management software, providing a unified High Availability Disaster Recovery (HADR) automation framework. In this tutorial, get an introduction to this integrated solution, and learn about useful diagnostic tools for working with DB2 and Tivoli Systems Automation, a key piece of the solution. By understanding how to solve problems and address issues, you can achieve the highest possible level of performance and reliability for your data.



Using the LDAP wrapper with InfoSphere Federation Server

23 Sep 2010 04:00:00 +0000

The LDAP wrapper is a pure Java package that is based on InfoSphere Federation Server Java wrapper SDK technology. By providing read-only access to LDAP directory servers in an SQL environment, the LDAP wrapper facilitates the integration and connectivity between business data in a relational database and human resource data in the LDAP directory server.



Use CSV and XML import methods to populate, update, and enhance your InfoSphere Business Glossary content

16 Sep 2010 04:00:00 +0000

IBM InfoSphere Business Glossary enables you to create, manage, and share an enterprise vocabulary and classification system. In Version 8.1.1, the InfoSphere Business Glossary introduced some new CSV and XML import and export methods to populate a business glossary with data. This tutorial provides technical instructions, tips, and examples to help you implement these new features to efficiently create a business glossary.



High-performance solution to feeding a data warehouse with real-time data, Part 2: Explore the integration options with staging tables and WebSphere MQ messages

02 Sep 2010 04:00:00 +0000

Feeding a data warehouse with changes from the source database can be very expensive. If the extraction is only done with SQL, there is no way to easily identify the rows that have been changed. IBM InfoSphere(TM) Replication Server can detect changed data by reading only the database log. This series shows how to use InfoSphere Replication Server to efficiently extract only the changed data and how to pass the changes to IBM InfoSphere DataStage(R) to feed the data warehouse. Part 1 of the 2-part series provided an overview of these products and how they can work together. In this Part 2, explore two integration options: using WebSphere(R) MQ messages with InfoSphere Event Publisher and using staging tables.
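The efficiency argument — read only the appended change records instead of re-scanning the whole source table — can be sketched generically. The names and shapes below are illustrative only, not InfoSphere Replication Server or DataStage APIs:

```python
# Conceptual sketch of log-based change data capture: the feed reads
# only changes appended since the last cycle, never the full table.
source = {1: "alice", 2: "bob"}
change_log = []          # (operation, key, value) records, like a DB log

def update(key, value):
    """Change the source and append a change record, as a database
    write implicitly appends to its recovery log."""
    source[key] = value
    change_log.append(("UPSERT", key, value))

def apply_changes(target, log, from_pos):
    """Feed the warehouse with only the changes since from_pos;
    return the new log position for the next cycle."""
    for op, key, value in log[from_pos:]:
        if op == "UPSERT":
            target[key] = value
    return len(log)

target = dict(source)    # one-time initial full load
pos = 0
update(2, "bobby")
update(3, "carol")
pos = apply_changes(target, change_log, pos)
print(target)  # {1: 'alice', 2: 'bobby', 3: 'carol'}
```

The cost per cycle is proportional to the number of changes, not the size of the source table, which is exactly why log capture beats SQL-only extraction for large, slowly changing sources.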



Integrate enterprise metadata with IBM InfoSphere and Cognos

29 Jul 2010 04:00:00 +0000

Knowledge about the quality and correctness of the huge volumes of data that drive day-to-day activities for enterprises and organizations is essential for effective decision making. Use this tutorial to learn how to gain visibility into your metadata, which in turn will lead to increased trust in data reliability, increased agility, and improved common understanding throughout your enterprise. This tutorial describes the significance of business and technical metadata integration and shows how heterogeneous metadata in an enterprise can be integrated using various IBM products. After a brief overview of the business issues and the integration solution, the tutorial provides a step-by-step guide showing you how to integrate metadata using tools from the IBM InfoSphere and Cognos product suites.



Recommended practices for using Cognos with Informix, Part 1: Deploy Informix with IBM Cognos Express 9

30 Jun 2010 04:00:00 +0000

Connecting your Informix databases to IBM Cognos Business Intelligence software gives you a way to unleash the power of your data with expanded query, reporting, and analysis capabilities. If you're ready to take that step, this two-part tutorial series gives you the information you need to install, configure, and deploy the necessary components to achieve the best results. Part 1 gets you started with using IBM Cognos Express V9 together with IBM Informix V11.5 as a content store and data source. In Part 2, you'll get the same level of detail for deploying Informix with IBM Cognos BI Server V10. The tutorials include recommended practices for each step along the way, based on lessons learned from real-world deployments on the Windows operating system.



Automate DB2 9.7 database maintenance in an embedded database environment

06 May 2010 04:00:00 +0000

Within an embedded database environment, it is important that you, as a database administrator, automate as many maintenance tasks as possible so that you can run the database with minimal intervention. IBM DB2 for Linux, UNIX, and Windows provides advanced automation features for configuring, tuning, and managing databases. These automation features allow you to spend less time managing routine tasks, and more time focusing on strategic issues that help your business gain and maintain a competitive advantage. This tutorial shows you how to automate routine maintenance tasks for DB2 on Linux or UNIX.



Using Optim with Informix Dynamic Server, Part 2: Scenarios for using Optim with IDS

29 Apr 2010 04:00:00 +0000

Part 1 of this tutorial series showed how to configure IBM Informix Dynamic Server with Optim. In this tutorial, walk through some scenarios to see how using Optim Data Privacy Solution with Informix can help you solve real-world problems.



Using Optim with Informix Dynamic Server, Part 1: Configure Informix Dynamic Server to work together with Optim Solutions

22 Apr 2010 04:00:00 +0000

IBM Informix Dynamic Server, with its copious feature set, meets or exceeds expectations for a high-performing, scalable, reliable, maintainable database server for enterprise applications. However, if you need to segregate older data and maintain it in a way that keeps it easily accessible for reporting or strategic decision making, you may want to consider implementing it with IBM Optim Solutions. Optim lets you archive historical data from the various database systems supporting an application, restore data from these archives to the production environment if needed, and mask production data so it can be used for reliability and application quality testing. In this tutorial, learn how to configure Informix Dynamic Server with Optim Solutions. In Part 2 of this series, you'll walk through some scenarios showing how using Optim Data Privacy Solution with Informix can help you solve real-world problems.