RDB via Table Application Bundle (Aspire 2)

AppBundle Name  RDBMS Connector
Maven Coordinates  com.searchtechnologies.aspire:app-rdbms-connector
Versions  2.2.2
Type Flags  scheduled
Inputs  AspireObject from a content source submitter holding all the information required for a crawl.
Outputs  An AspireObject containing the data extracted from the database to be processed.

The RDBMS Connector performs full and incremental scans of the rows in one or more tables of a database. Each row extracted from the database is tagged with one of three possible actions (add, update or delete), so a different Aspire pipeline or job route can decide which task to perform based on this action. The exact nature of the extraction method is documented with the RDB via Table Scanner.

The connector, once started, can be stopped, paused or resumed by sending a new Scanner Configuration Job. Typically the start job contains all the information required to perform the scan. When pausing or stopping, the connector waits until all the jobs it has published have completed before updating the statistics and status of the connector. Pausing a scan only works in incremental mode; resuming is much like a new incremental feed, except that the pre-update SQL is not run.
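
For example, a control job to pause a running crawl might look like the following. This is only a minimal sketch: the @action values are those listed under Scanner Control Configuration below, and the attributes shown mirror the Header Example later on this page; the exact attribute set may vary by installation.

  <doc action="pause" actionType="manual" normalizedCSName="FeedOne_Connector">
    <displayName>testSource</displayName>
  </doc>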


Configuration

This section lists all configuration parameters available to install the RDB via Table Application Bundle and to execute crawls using the connector.

General Application Configuration

Property Type Default Description
disableTextExtract boolean false By default, connectors use Apache Tika to extract text from downloaded documents. If you wish to apply special text processing to the downloaded document in the workflow, you should disable text extraction. The downloaded document is then available as a content stream.
workflowReloadPeriod int 15m The period after which to reload the business rules. The value is interpreted as milliseconds by default, but can be suffixed with ms, s, m, h or d to indicate the required units.
workflowErrorTolerant boolean false When set, exceptions in workflow rules will only affect the execution of the rule in which the exception occurs. Subsequent rules will be executed and the job will complete the workflow successfully. If not set, exceptions in workflow rules will be re-thrown and the job will be moved to the error workflow.
debug boolean false Controls whether debugging is enabled for the application. Debug messages will be written to the log files.


Configuration Example

To install the application bundle, add the following configuration to the <autoStart> section of the Aspire settings.xml file.

<application config="com.searchtechnologies.aspire:app-rdbms-connector">
  <properties>
    <property name="generalConfiguration">true</property>
    <property name="disableTextExtract">false</property>
    <property name="workflowReloadPeriod">15s</property>
    <property name="workflowErrorTolerant">false</property>
    <property name="debug">true</property>
  </properties>
</application>

Note: Any optional property can be removed from the configuration to use the default value described in the table above.
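
For example, a minimal installation that relies entirely on the defaults from the table above could be reduced to the sketch below. The generalConfiguration property is kept only because the example above sets it; it is not described in the table.

<application config="com.searchtechnologies.aspire:app-rdbms-connector">
  <properties>
    <property name="generalConfiguration">true</property>
  </properties>
</application>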

Source Configuration

Scanner Control Configuration

The following table describes the list of attributes that the AspireObject of the incoming scanner job requires to correctly execute and control the flow of a scan process.

Element Type Options Description
@action string start, stop, pause, resume, abort Control command to tell the scanner which operation to perform. Use the start option to launch a new crawl.
@actionProperties string full, incremental When a start @action is received, this attribute tells the scanner whether to run a full or an incremental crawl.
@normalizedCSName string Unique identifier name for the content source that will be crawled.
displayName string Display or friendly name for the content source that will be crawled.

Header Example

  <doc action="start" actionProperties="full" actionType="manual" crawlId="0" dbId="0" jobNumber="0" normalizedCSName="FeedOne_Connector"
   scheduleId="0" scheduler="##AspireSystemScheduler##" sourceName="ContentSourceName">
    ...
    <displayName>testSource</displayName>
    ...
  </doc>

All configuration properties described in this section are relative to /doc/connectorSource of the AspireObject of the incoming Job.

Property Type Default Description
url string The JDBC URL for your RDBMS server and database. For example, "jdbc:mysql://192.168.40.27/mydb" (MySQL). This will vary depending on the type of RDBMS.
username string The name of a database user with read-only access to all of the tables which need to be indexed, and write access to the necessary update tables (if update management is handled through the RDB).
password string The database password
dbDriverJar string Path to the JDBC driver JAR file for your RDBMS. Typically this is placed in the "lib" directory inside your Aspire Home, for example "lib/myjdbcdriver.jar".
dbDriverClass string (Optional) The name of the JDBC driver class to use if the class name from the META-INF/services/java.sql.Driver file in the driver JAR should not be used, or if that file does not exist in the driver JAR.
fullSelectSQL string This SQL will be executed when the user clicks on "full crawl". Each record produced by this statement will be indexed as a separate document. Some field names have special meaning (such as 'title', 'content', 'url', 'aspire_id', etc.)
idColumn string id The name of the column which contains unique identifiers for each row. This is used for both full and incremental crawls and must match the name returned by the SQL. If the column is aliased using the SQL "AS" construct, you should provide the alias name here.
useSlices boolean false True if you want to divide the full crawl SQL into multiple slices. Only works when the id column is an integer.
slicesNum integer 10 The number of slices into which the full crawl SQL is split (illustrated after the Scanner Configuration Example below). Only works when idColumn is an integer.
preUpdateSQL string SQL to run before an incremental crawl. This SQL can be used to mark documents for update, save timestamps, clear update tables, etc., as needed to prepare for an incremental crawl (see the updates-table sketch after this table). Can be left blank if you never run an incremental crawl.
updateSQL string SQL to run for an incremental crawl. This SQL should provide a list of all adds and deletes to the documents in the index. Some field names have special meaning (such as 'title', 'content', 'url', 'aspire_id', etc.); see the wiki for more information. Note that the special column 'aspire_action' should report 'I' (for inserts), 'U' (for updates, typically handled the same as inserts by most search engines), and 'D' (for deletes).
postUpdateSQL string SQL to run after each record is processed. This SQL can be used to un-mark or delete each document from the update tables once it is complete.
postUpdateFailedSQL string SQL to run after each record if processing fails. If this SQL is left blank, the 'Post incremental crawl SQL' will be run instead.
seqColumn string seq The name of the column in the returned data which holds the sequence number of the update.
actionColumn string action The name of the column in the returned data which holds the action of the update (i.e. Insert, Update or Delete).
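
The incremental SQL statements above assume some form of change-capture or queue table that records inserts, updates and deletes made to the main data. The sketch below is purely hypothetical: the table and column names (seq_id, id, action, status) are taken from the Scanner Configuration Example that follows, and how the table is populated (triggers, application code, batch jobs) is entirely up to your database design.

-- Hypothetical change-capture table for incremental crawls; names follow the
-- Scanner Configuration Example below and should be adapted to your own schema.
CREATE TABLE updates_table (
    seq_id  SERIAL  PRIMARY KEY,   -- sequence number of the change, reported via seqColumn
    id      INTEGER NOT NULL,      -- key of the changed row in the main table (idColumn)
    action  CHAR(1) NOT NULL,      -- 'I' insert, 'U' update, 'D' delete (actionColumn)
    status  CHAR(1) DEFAULT 'W'    -- e.g. 'W' waiting, 'I' in progress, 'A' applied,
                                   -- as manipulated by the pre/post update SQL
);

-- A trigger or the application inserts one row per change, for example:
INSERT INTO updates_table (id, action) VALUES (10, 'U');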

Scanner Configuration Example

<doc action="start" actionProperties="full" normalizedCSName="My_RDB_Source">
  <connectorSource>
    <url>jdbc:postgresql://localhost:5433/RDBMS_Test_DB</url>
    <username>aspire_user</username>
    <password>encrypted:062062C25293A7A3BA08D29385F6C05A</password>
    <dbDriverJar>lib\postgresql-9.3-1100.jdbc41.jar</dbDriverJar>
    <fullSelectSQL>SELECT "id", "name", "company", "email", "birthday" FROM "Test_table" WHERE "id" &lt;= 1000 AND {SLICES}</fullSelectSQL>
    <idColumn>id</idColumn>
    <useSlices>true</useSlices>
    <slicesNum>10</slicesNum>
    <incrementalIndex>true</incrementalIndex>
    <preUpdateSQL>UPDATE update_table SET status='I' WHERE action='I'</preUpdateSQL>
    <updateSQL>SELECT updates_table.seq_id AS seq_id, updates_table.id AS id, updates_table.action AS update_action, main_data.col1 AS cola, main_data.col2 AS colb, main_data.col3 AS coly, main_data.col4 AS colz FROM main_data RIGHT OUTER JOIN updates_table ON main_data.id = updates_table.id WHERE updates_table.ACTION = 'I' ORDER BY updates_table.seq_id ASC</updateSQL>
    <postUpdateSQL>UPDATE update_table SET status='A' WHERE status='I'</postUpdateSQL>
    <postUpdateFailedSQL/>
    <seqColumn>seq_id</seqColumn>
    <actionColumn>action</actionColumn>
  </connectorSource>
  <displayName>My RDB Source</displayName>
</doc>
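
The fullSelectSQL in this example contains the {SLICES} placeholder. When useSlices is true, the connector replaces the placeholder with a per-slice condition on the integer id column and runs slicesNum separate queries in place of one large full-crawl query. The exact text substituted is determined by the connector; purely as an illustration, assuming a modulo-based partition of the id column, one of the ten sliced queries might look like this:

SELECT "id", "name", "company", "email", "birthday"
FROM "Test_table"
WHERE "id" <= 1000 AND ("id" % 10) = 3  -- hypothetical expansion of {SLICES} for slice 3 of 10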

Note: To launch a crawl, the job should be sent (processed/enqueued) to the "/RDBConnector/Main" pipeline.


Output

<doc>
  <docType>item</docType>
  <url>10</url>
  <id>10</id>
  <fetchUrl>10</fetchUrl>
  <displayUrl>10</displayUrl>
  <snapshotUrl>001 10</snapshotUrl>
  <sourceType>rdb</sourceType>
  <sourceName>My_RDB_Source</sourceName>
  <id>10</id>
  <connectorSpecific type="rdb">
    <field name="name">Meghan1</field>
    <field name="company">Interdum Enim Non LLC</field>
    <field name="email">enim.gravida@diam.net</field>
    <field name="birthday">10/21/13</field>
  </connectorSpecific>
  <connectorSource>
    <url>jdbc:postgresql://localhost:5433/RDBMS_Test_DB</url>
    <username>aspire_user</username>
    <password>encrypted:062062C25293A7A3BA08D29385F6C05A</password>
    <dbDriverJar>lib\postgresql-9.3-1100.jdbc41.jar</dbDriverJar>
    <fullSelectSQL>SELECT "id", "name", "company", "email", "birthday" FROM "Test_table" WHERE "id" &lt;= 1000 AND {SLICES}</fullSelectSQL>
    <idColumn>id</idColumn>
    <useSlices>true</useSlices>
    <slicesNum>10</slicesNum>
    <incrementalIndex>true</incrementalIndex>
    <preUpdateSQL>UPDATE update_table SET status='I' WHERE action='I'</preUpdateSQL>
    <updateSQL>SELECT updates_table.seq_id AS seq_id, updates_table.id AS id, updates_table.action AS update_action, main_data.col1 AS cola, main_data.col2 AS colb, main_data.col3 AS coly, main_data.col4 AS colz FROM main_data RIGHT OUTER JOIN updates_table ON main_data.id = updates_table.id WHERE updates_table.ACTION = 'I' ORDER BY updates_table.seq_id ASC</updateSQL>
    <postUpdateSQL>post incremental</postUpdateSQL>
    <postUpdateFailedSQL>UPDATE update_table SET status='A' WHERE status='I'</postUpdateFailedSQL>
    <seqColumn>seq_id</seqColumn>
    <actionColumn>action</actionColumn>
    <displayName>My RDB Source</displayName>
  </connectorSource>
  <action>add</action>
  <hierarchy>
    <item id="D3D9446802A44259755D38E6D163E820" level="1" name="My_RDB_Source" url="10"/>
  </hierarchy>
</doc>