Feed aggregator

Laura Ashley Customizes Online Retail Experiences Globally with Oracle Commerce Cloud

Oracle Press Releases - Fri, 2019-01-11 07:00
Press Release
Laura Ashley Customizes Online Retail Experiences Globally with Oracle Commerce Cloud
Oracle helps international retailer seamlessly deliver custom products online to customers across the world

Redwood Shores, Calif.—Jan 11, 2019

Laura Ashley, an international lifestyle retailer, has selected Oracle Commerce Cloud to support its growing global online business. With Oracle Commerce Cloud, Laura Ashley is able to create seamless, integrated customer experiences across traditional and digital channels and enhance inventory management to meet growing demand for custom, one-of-a-kind goods online.

Laura Ashley sells custom furniture, home accessories, decorating and fashion products in stores throughout the UK, Ireland and France, and through franchisees in 29 territories globally. To further extend its growing online business, Laura Ashley began selling custom products online in 2001. With over one million combinations of made-to-order products available at any given time, Laura Ashley needed more control over its online store to seamlessly deliver custom products to customers across the world. To manage this complexity and ensure it could deliver the best possible customer experience, Laura Ashley selected Oracle Commerce Cloud.

"Customers can see our products and customize them in store, but a growing number want the option to buy customized items online as well and this was adding significant complexity to our business,” said Colin Rice, chief information officer, Laura Ashley. “With Oracle Commerce Cloud, we have been able to streamline business processes and capitalize on the opportunity to sell custom goods online globally. Oracle Commerce Cloud was the only platform that could accommodate our needs and still give us room to grow.”

With Oracle Commerce Cloud, Laura Ashley has been able to redesign and internationalize its website so each territory can customize its online storefront. By enabling Laura Ashley to seamlessly integrate its inventory system and commerce platform, Oracle Commerce Cloud has provided a flexible platform that allows the company to control its front end, deliver personalized experiences at scale and build on its rapidly expanding online business, which has grown 23 percent in the last three years.

“In our 2018 consumer research study The New Topography of Retail, we discovered that 62 percent of European consumers declared a fast and responsive online experience as a top priority for their brand experience. Laura Ashley was looking for a commerce platform that gave its team the control and creativity to update the look and feel of its online store so it could continue to delight consumers with a seamless shopping experience,” said Dusan Rnic, regional vice president, Oracle Retail. “With the help of Oracle Commerce Cloud, Laura Ashley now owns its front-end experience and has been able to reduce management costs and the time it takes to deploy changes.”

Oracle Commerce Cloud is part of Oracle Customer Experience (CX) Cloud, which empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences and outcomes, Oracle CX Cloud Suite helps customers reduce IT complexity, deliver innovative customer experiences and achieve predictable and tangible business results.

Be sure to visit Oracle’s booth #2321 at the NRF Big Show, January 13-15, 2019. Learn more about Oracle Retail here.

Contact Info
Kimberly Guillon
Oracle
209.601.9152
kim.guillon@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates.

Talk to a Press Contact

Kimberly Guillon

  • 209.601.9152

[Blog] [Solved] java.lang.OutOfMemoryError: GC overhead limit exceeded

Online Apps DBA - Fri, 2019-01-11 04:27

Have you ever encountered the “java.lang.OutOfMemoryError: GC overhead limit exceeded” error while applying application-tier WebLogic patches as part of Oracle E-Business Suite 12.2.8 patching? Want to know how to troubleshoot it with a complete solution? If yes, visit: https://k21academy.com/appsdba44 and check out our blog covering: ✔ Issue while applying Application tier Weblogic Server […]

The post [Blog] [Solved] java.lang.OutOfMemoryError: GC overhead limit exceeded appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Compile additional packages for Oracle VM Server

Yann Neuhaus - Fri, 2019-01-11 03:13

I needed a special package on my OVM Server 3.4.6.
The package is called fio and is needed to do some I/O performance tests.
Unfortunately, OVM Server does not provide any packages for compiling software, and installing additional software on your OVM Server is also not supported.
But there is a solution:

Install a VM with Oracle VM Server 3.4.6 and add the official OVM SDK repositories:


rm -f /etc/yum.repos.d/*
echo '
[ovm34]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleVM/OVM3/34_latest/x86_64/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

[ol6_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

[ol6_addons]
name=Oracle Linux $releasever Add ons ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/addons/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

[ol6_UEKR4]
name=Latest Unbreakable Enterprise Kernel Release 4 for Oracle Linux $releasever ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/UEKR4/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
' > /etc/yum.repos.d/ovm-sdk.repo

Now install the necessary packages and compile your software:

On OVM 3.4 SDK VM
yum install -y gcc make zlib-devel libaio libaio-devel
wget https://codeload.github.com/axboe/fio/zip/master
unzip master
cd fio-master
./configure
make

Copy the compiled executable “fio” to your OVM Server or to an attached NFS share.
Run the program and do what you wanna do.
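
For example, a simple random-read test could look like this (a hypothetical invocation; adjust the target directory and sizing parameters to your environment):

# 60-second random-read test, 4k blocks, using the libaio engine
./fio --name=randread --directory=/mnt/test --ioengine=libaio \
      --rw=randread --bs=4k --size=1G --numjobs=4 --runtime=60 \
      --time_based --group_reporting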

In my case I will run several different performance tests, but that is a story for another blog post.

Reference: Oracle VM 3: How-to build an Oracle VM 3.3/3.4 SDK platform (Doc ID 2160955.1)

The post Compile additional packages for Oracle VM Server appeared first on Blog dbi services.

Step by Step Installation of PostgreSQL 11 on Oracle Linux 7

Pakistan's First Oracle Blog - Fri, 2019-01-11 02:30
This is a quick run-through of the commands I used on my test virtual machine to install PostgreSQL 11 on Oracle Enterprise Linux 7.



[root@docker ~]# curl -O https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-oraclelinux11-11-2.noarch.rpm
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4952  100  4952    0     0   1889      0  0:00:02  0:00:02 --:--:--  1890
[root@docker ~]# rpm -ivh pgdg-oraclelinux11-11-2.noarch.rpm
warning: pgdg-oraclelinux11-11-2.noarch.rpm: Header V4 DSA/SHA1 Signature, key ID 442df0f8: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:pgdg-oraclelinux11-11-2          ################################# [100%]
[root@docker ~]# yum list postgres*
Loaded plugins: langpacks, ulninfo
pgdg11                                                   | 4.1 kB     00:00   
(1/2): pgdg11/7Server/x86_64/primary_db                    | 141 kB   00:02   
(2/2): pgdg11/7Server/x86_64/group_gz                      |  245 B   00:02   
Available Packages
postgresql.i686                     9.2.24-1.el7_5           ol7_latest       
postgresql.x86_64                   9.2.24-1.el7_5           ol7_latest       
postgresql-contrib.x86_64           9.2.24-1.el7_5           ol7_latest       
postgresql-devel.i686               9.2.24-1.el7_5           ol7_latest       
postgresql-devel.x86_64             9.2.24-1.el7_5           ol7_latest       
postgresql-docs.x86_64              9.2.24-1.el7_5           ol7_latest       
postgresql-jdbc.noarch              42.2.5-1.rhel7.1         pgdg11           
postgresql-jdbc-javadoc.noarch      42.2.5-1.rhel7.1         pgdg11           
postgresql-libs.i686                9.2.24-1.el7_5           ol7_latest       
postgresql-libs.x86_64              9.2.24-1.el7_5           ol7_latest       
postgresql-odbc.x86_64              09.03.0100-2.el7         ol7_latest       
postgresql-plperl.x86_64            9.2.24-1.el7_5           ol7_latest       
postgresql-plpython.x86_64          9.2.24-1.el7_5           ol7_latest       
postgresql-pltcl.x86_64             9.2.24-1.el7_5           ol7_latest       
postgresql-server.x86_64            9.2.24-1.el7_5           ol7_latest       
postgresql-static.i686              9.2.24-1.el7_5           ol7_latest       
postgresql-static.x86_64            9.2.24-1.el7_5           ol7_latest       
postgresql-test.x86_64              9.2.24-1.el7_5           ol7_latest       
postgresql-unit11.x86_64            7.0-1.rhel7              pgdg11           
postgresql-unit11-debuginfo.x86_64  7.0-1.rhel7              pgdg11           
postgresql-upgrade.x86_64           9.2.24-1.el7_5           ol7_optional_latest
postgresql11.x86_64                 11.1-1PGDG.rhel7         pgdg11           
postgresql11-contrib.x86_64         11.1-1PGDG.rhel7         pgdg11           
postgresql11-debuginfo.x86_64       11.1-1PGDG.rhel7         pgdg11           
postgresql11-devel.x86_64           11.1-1PGDG.rhel7         pgdg11           
postgresql11-docs.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql11-libs.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql11-llvmjit.x86_64         11.1-1PGDG.rhel7         pgdg11           
postgresql11-odbc.x86_64            11.00.0000-1PGDG.rhel7   pgdg11           
postgresql11-plperl.x86_64          11.1-1PGDG.rhel7         pgdg11           
postgresql11-plpython.x86_64        11.1-1PGDG.rhel7         pgdg11           
postgresql11-pltcl.x86_64           11.1-1PGDG.rhel7         pgdg11           
postgresql11-server.x86_64          11.1-1PGDG.rhel7         pgdg11           
postgresql11-tcl.x86_64             2.4.0-2.rhel7.1          pgdg11           
postgresql11-test.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql_anonymizer11.noarch      0.2.1-1.rhel7            pgdg11           
[root@docker ~]#
[root@docker ~]#
[root@docker ~]# yum list postgres* | grep 11
postgresql-jdbc.noarch              42.2.5-1.rhel7.1         pgdg11           
postgresql-jdbc-javadoc.noarch      42.2.5-1.rhel7.1         pgdg11           
postgresql-unit11.x86_64            7.0-1.rhel7              pgdg11           
postgresql-unit11-debuginfo.x86_64  7.0-1.rhel7              pgdg11           
postgresql11.x86_64                 11.1-1PGDG.rhel7         pgdg11           
postgresql11-contrib.x86_64         11.1-1PGDG.rhel7         pgdg11           
postgresql11-debuginfo.x86_64       11.1-1PGDG.rhel7         pgdg11           
postgresql11-devel.x86_64           11.1-1PGDG.rhel7         pgdg11           
postgresql11-docs.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql11-libs.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql11-llvmjit.x86_64         11.1-1PGDG.rhel7         pgdg11           
postgresql11-odbc.x86_64            11.00.0000-1PGDG.rhel7   pgdg11           
postgresql11-plperl.x86_64          11.1-1PGDG.rhel7         pgdg11           
postgresql11-plpython.x86_64        11.1-1PGDG.rhel7         pgdg11           
postgresql11-pltcl.x86_64           11.1-1PGDG.rhel7         pgdg11           
postgresql11-server.x86_64          11.1-1PGDG.rhel7         pgdg11           
postgresql11-tcl.x86_64             2.4.0-2.rhel7.1          pgdg11           
postgresql11-test.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql_anonymizer11.noarch      0.2.1-1.rhel7            pgdg11           
[root@docker ~]#
[root@docker ~]#
[root@docker ~]# yum install postgresql11-server
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package postgresql11-server.x86_64 0:11.1-1PGDG.rhel7 will be installed
--> Processing Dependency: postgresql11-libs(x86-64) = 11.1-1PGDG.rhel7 for package: postgresql11-server-11.1-1PGDG.rhel7.x86_64
--> Processing Dependency: postgresql11(x86-64) = 11.1-1PGDG.rhel7 for package: postgresql11-server-11.1-1PGDG.rhel7.x86_64
--> Processing Dependency: libpq.so.5()(64bit) for package: postgresql11-server-11.1-1PGDG.rhel7.x86_64
--> Running transaction check
---> Package postgresql11.x86_64 0:11.1-1PGDG.rhel7 will be installed
---> Package postgresql11-libs.x86_64 0:11.1-1PGDG.rhel7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================================
 Package                                   Arch                         Version                                  Repository                    Size
====================================================================================================================================================
Installing:
 postgresql11-server                       x86_64                       11.1-1PGDG.rhel7                         pgdg11                       4.7 M
Installing for dependencies:
 postgresql11                              x86_64                       11.1-1PGDG.rhel7                         pgdg11                       1.6 M
 postgresql11-libs                         x86_64                       11.1-1PGDG.rhel7                         pgdg11                       359 k

Transaction Summary
====================================================================================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 6.7 M
Installed size: 29 M
Is this ok [y/d/N]: y
Downloading packages:
(1/3): postgresql11-libs-11.1-1PGDG.rhel7.x86_64.rpm                                                                         | 359 kB  00:00:04   
(2/3): postgresql11-11.1-1PGDG.rhel7.x86_64.rpm                                                                              | 1.6 MB  00:00:04   
(3/3): postgresql11-server-11.1-1PGDG.rhel7.x86_64.rpm                                                                       | 4.7 MB  00:00:02   
----------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                               974 kB/s | 6.7 MB  00:00:07   
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : postgresql11-libs-11.1-1PGDG.rhel7.x86_64                                                                                        1/3
  Installing : postgresql11-11.1-1PGDG.rhel7.x86_64                                                                                             2/3
  Installing : postgresql11-server-11.1-1PGDG.rhel7.x86_64                                                                                      3/3
  Verifying  : postgresql11-libs-11.1-1PGDG.rhel7.x86_64                                                                                        1/3
  Verifying  : postgresql11-11.1-1PGDG.rhel7.x86_64                                                                                             2/3
  Verifying  : postgresql11-server-11.1-1PGDG.rhel7.x86_64                                                                                      3/3

Installed:
  postgresql11-server.x86_64 0:11.1-1PGDG.rhel7                                                                                                   

Dependency Installed:
  postgresql11.x86_64 0:11.1-1PGDG.rhel7                                 postgresql11-libs.x86_64 0:11.1-1PGDG.rhel7                               

Complete!
[root@docker ~]#
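
Note that yum only installs the binaries. To get a running instance you would typically initialize the cluster and start the service next (not shown in the transcript above; these are the standard commands for the PGDG packages on EL7):

[root@docker ~]# /usr/pgsql-11/bin/postgresql-11-setup initdb
[root@docker ~]# systemctl enable postgresql-11
[root@docker ~]# systemctl start postgresql-11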

Categories: DBA Blogs

PostgreSQL 12, pg_stat_statements_reset for userid, queryid and dbid

Yann Neuhaus - Fri, 2019-01-11 00:29

PostgreSQL 12 will give you more control over resetting statistics gathered by pg_stat_statements. When you check the documentation for PostgreSQL 11 (as linked in the previous sentence) you will see that the function has the following signature:

pg_stat_statements_reset() returns void

This means your only choice is to reset all the statistics. Today this commit landed and it will give you more control over which statistics to reset. The signature of the function now looks like this:

pg_stat_statements_reset(userid Oid, dbid Oid, queryid bigint) returns void

There are three new parameters for controlling what to reset: the user id, the database id and the id of a specific query. By default all of them are 0, meaning the function will behave as in previous versions: discarding all the statistics. Let's create two users, two databases and a table in each so we will have something in pg_stat_statements we can work with:

postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create user u1 with login password 'u1'" postgres
CREATE ROLE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create user u2 with login password 'u2'" postgres
CREATE ROLE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create database db1 with owner = u1" postgres
CREATE DATABASE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create database db2 with owner = u2" postgres
CREATE DATABASE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create table t1 (a int)" -U u1 db1
CREATE TABLE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "create table t1 (a int)" -U u2 db2
CREATE TABLE
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "insert into t1 select * from generate_series(1,100)" -U u1 db1
INSERT 0 100
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "insert into t1 select * from generate_series(1,100)" -U u2 db2
INSERT 0 100
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "select count(*) from t1" -U u1 db1
 count 
-------
   100
(1 row)
postgres@pgbox:/u02/pgdata/DEV/ [PGDEV] psql -c "select count(*) from t1" -U u2 db2
 count 
-------
   100
(1 row)

We should be able to see the statements in pg_stat_statements, but before doing that let's check the dbids:

postgres@pgbox:/home/postgres/ [PGDEV] oid2name 
All databases:
    Oid  Database Name  Tablespace
----------------------------------
  16394            db1  pg_default
  16395            db2  pg_default
  13569       postgres  pg_default
  13568      template0  pg_default
      1      template1  pg_default

What do we see for our two databases?

postgres=# select userid,dbid,queryid,calls,query from pg_stat_statements where dbid in (16394,16395);
 userid | dbid  |       queryid        | calls |                        query                        
--------+-------+----------------------+-------+-----------------------------------------------------
  16392 | 16394 |  7490503619681577402 |     3 | set client_encoding to 'unicode'
  16393 | 16395 |   843119317166481275 |     1 | insert into t1 select * from generate_series($1,$2)
  16392 | 16394 | -3672942776844552312 |     1 | insert into t1 select * from generate_series($1,$2)
  16393 | 16395 |  7490503619681577402 |     3 | set client_encoding to 'unicode'
  16392 | 16394 |  5583984467630386743 |     1 | select count(*) from t1
  16393 | 16395 |  4983979802666994390 |     1 | select count(*) from t1
  16393 | 16395 |  6842879890091936614 |     1 | create table t1 (a int)
  16392 | 16394 |  6842879890091936614 |     1 | create table t1 (a int)

We should be able to reset the statistics for a specific query:

postgres=# select userid,dbid,queryid,calls,query from pg_stat_statements where dbid in (16394,16395) and queryid = 6842879890091936614;
 userid | dbid  |       queryid       | calls |          query          
--------+-------+---------------------+-------+-------------------------
  16393 | 16395 | 6842879890091936614 |     1 | create table t1 (a int)
  16392 | 16394 | 6842879890091936614 |     1 | create table t1 (a int)
(2 rows)
postgres=# select pg_stat_statements_reset(0, 0, 6842879890091936614);
 pg_stat_statements_reset 
--------------------------
 
(1 row)

postgres=# select userid,dbid,queryid,calls,query from pg_stat_statements where dbid in (16394,16395) and queryid = 6842879890091936614;
 userid | dbid | queryid | calls | query 
--------+------+---------+-------+-------
(0 rows)

Notice that this of course resets the statistics for both statements as they have the same queryid. You could specify the userid and/or dbid as well to reset just one of them. Nice new feature.
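
For example, to discard only the statistics user u2 (userid 16393) accumulated in db2 (dbid 16395), you could pass those two ids and leave queryid at 0 (a sketch based on the ids from the listings above):

postgres=# select pg_stat_statements_reset(16393, 16395, 0);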

The post PostgreSQL 12, pg_stat_statements_reset for userid, queryid and dbid appeared first on Blog dbi services.

Displaying an "Unsaved Changes" Warning in Visual Builder

Shay Shmeltzer - Thu, 2019-01-10 17:14

End-users are strange. Sometimes they need the system we are developing to remind them when they do silly things. For example, some users want us to remind them if they are trying to navigate away from a page where they made changes to data but didn't click "save". Below I'll show you an approach for implementing such a behavior in Oracle Visual Builder Cloud Service.

What we are going to do is cancel navigation actions until users acknowledge that they are ok to leave the page.

I will leave it up to you to decide when this should go into effect. While some people might claim that this should be the default behavior across the app, this is debatable. For example, if I go into a "create record" type of page, and then without making any changes decide to leave that page - should I be prompted with a "you have unsaved changes" message? Isn't leaving the page the equivalent of saying - I don't need to make changes? As you can see, the decision is not clear cut - so in the demo below I let you control when we enter this "changes were made" status simply by pressing a button. In real applications this can be done, for example, when a value in a field changes. At the end of the day, all you need to do is set a boolean variable that tracks whether we are now in "changes were made" status.

In the demo I added a simple dialog to the shell page (the page that acts as the containing template to the rest of the pages) - this dialog has a "you have unsaved changes, are you sure you want to leave?" type of warning and two buttons "Yes" and "No". (Quick tip/reminder - you can see how to add a dialog to a page here, and don't forget to include the popup import in the page's json file).

I add an action chain to the shell page that will be invoked on the vbBeforeExit event - in there I check the value of the "changes made" variable and if changes were made - I show the dialog. Then I use a return action to return an object type variable that has a boolean variable called "cancelled" set to true.  Returning such an object tells the flow to stop the navigation.
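
Conceptually, the object returned from that action chain is as simple as this (a sketch of the shape described above; in Visual Builder you define this as the return action's payload rather than hand-writing code):

{ "cancelled": true }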

Now all I needed to add were action chains to the buttons for "yes" and "no" to close the dialog, and for the "yes" scenario to also set the changes-made boolean variable back to false - so the next time we click to navigate away we don't show the dialog.

Check out the video to see the runtime behavior and the various parts that make up the solution.

Categories: Development

Four Types of Mindbreeze Relevancy Boostings and How to Use Them

Strong relevancy is critical to a well-adopted search solution. Mindbreeze provides several ways to fine tune relevancy which they refer to as boosting. This post will explore how boosting works and four ways to apply boosting rules within Mindbreeze.

About Boosting & Rank Scores

Before we start altering relevancy, it’s important to examine how boosting works within Mindbreeze. Mindbreeze provides a baseline algorithm with factors such as term frequency, term proximity, freshness, etc. (while there are ways to alter these core signals, we’ll save that topic for another time). Boostings address many common relevancy-adjustment use cases and are the easiest way to alter rankings. Boostings are applied to the baseline rankings by a configured amount. Although the term “boost” generally implies an increase, boosting can be used to increase or decrease rank scores relative to the baseline.

Mindbreeze boostings are factor-based (i.e. multiplicative). For example, a boost factor of 2.0 would make something twice as relevant as the baseline, while a boost factor of 0.5 would make it half as relevant. For this reason, it’s helpful to monitor rankings (called rank scores) before and after boosting in order to determine an appropriate boosting factor. Mindbreeze provides two options for viewing rank scores, as described below.

Viewing Rank using the Export

The easiest way to see the rank of multiple search results is to use the export feature. From the default Mindbreeze search client, perform your search and select Export. In the Export results window, select the plus sign (+) and add “mes:hitinfo:rank” to the visible columns. You can simply start typing “rank” in the input box and this column name will appear.

Viewing Rank within JSON Results

If using the Export feature is not available, or you need to test rankings that are specific to a custom search application, you can also view the rank within the Mindbreeze JSON response for a search request. Follow these instructions to do so:

  1. Open the developer-tools dock in your browser (F12).
  2. Navigate to the Network tab.
  3. Perform your search.
  4. Expand the search request.
  5. View the response body and drill down into the search request response data to find the desired rank. For example, to see the rank of the first result, you would select result set > results > 0 > rank_score.

For more information on the data contained in the search response, see the Mindbreeze documentation on api.v2.search.
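
For orientation, the drill-down path in step 5 corresponds to a response of roughly this shape (a hypothetical, heavily abbreviated sketch with a placeholder value, not the complete Mindbreeze schema):

{
  "resultset": {
    "results": [
      { "rank_score": 1.234 }
    ]
  }
}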

Term2Document Boosting

Term2DocumentBoost is a Mindbreeze query transformation plugin which allows you to apply relevance tuning to specific search queries (or all queries) based on defined rules. It is the primary way to adjust relevancy within Mindbreeze. The plugin is configured either for a specific index or globally. If you configure an index-specific boosting file, those rules will be applied instead of (not in addition to) the global rules. If you’d like to apply both sets of rules, the global rules should be copied into the index-specific boosting file, as each index can only reference one file at a time.

Term2Document boosting rules are always applied to searches against the indices for which they are configured, so they’re best used for static boosting, as opposed to the more dynamic boosting options described in later sections. Term2Document boosting can be used to increase the relevance of certain documents for specific search queries. For example, a search for “news” can be tuned so that documents with the “post-type” metadata value “newsarticle” have higher relevance. Term2Document boosting can also be used to generally increase the relevance of certain documents based on any combination of metadata-value pairs, for example, all documents from a “Featured Results” data source, or all documents from the “products” section of a website. Rules can use regular expressions to accommodate more complex patterns. In the second-to-last example below, we show how pages one level off the root of a website can be boosted using a regular expression.

The Term2Document Boost file uses five columns to apply boosting rules.

  • Term: This is the search term you want to trigger the rule. Leave this blank to trigger the rule for all searches.
  • Key: The name of the metadata field on which content to be boosted will be identified. If you want to boost documents matching a pattern in the full text of the document contents, the “Key” column should contain the word “content”.
  • Pattern: A term or pattern that determines the metadata value on which content to be boosted will be identified. This column supports regular expressions. Please note, any fields you wish to boost in this way should be added to the Aggregated Metadata Keys configuration for the respective indices in order to enable regex matching.
  • Boost: The boost factor. Values less than one (e.g. 0.5) should be preceded by a zero (i.e. 0.5 not .5).
  • Query: Optional column for advanced configuration. Instead of specifying a Term, Key, and Pattern, you can use this column to create more flexible boosting rules via the Mindbreeze query language. This is helpful when you want to change the boosting rule for each user’s query. For example, if someone searches for a person (e.g. “John Doe”), documents with this person as the Author (i.e. stored in the Author metadata) can be boosted. This is shown in the last example below.
Term2Document Boosting Examples

Term | Key                   | Pattern                         | Boost | Query
-----|-----------------------|---------------------------------|-------|------------------
news | post-type             | newsarticle                     | 5     |
     | datasource/fqcategory | DataIntegration:FeaturedResults | 100   |
     | key                   | .*\/products.*                  | 1.5   |
     | key                   | ^http:\/\/[^\/]*\/[^\/]*$       | 2.5   |
     |                       |                                 | 2.0   | Author:{{query}}

Additional information can be found in the Mindbreeze documentation on the Term2DocumentBoost Transformer Plugin.

Category Descriptor Boosting

Mindbreeze uses an XML file called the CategoryDescriptor to control various aspects of the search experience for each data source category (e.g. Web, DataIntegration, Microsoft File, etc.). Each category plugin includes a default CategoryDescriptor which can be extended or modified to meet your needs.

You may modify the CategoryDescriptor if you wish to add localized display labels for metadata field names or alter the default metadata visible from the search results page. In this case, we’re focused on how you can use it to boost the overall impact of a metadata field on relevancy. This is common if you wish to change the impact of a term’s presence in certain fields over others. Common candidates for up-boosting include title, keywords, or summary. Candidates for down-boosting may include ID numbers, GUIDs, or other values which could yield confusing or irrelevant results in certain cases.

The default CategoryDescriptor is located in the respective plugin’s zip file. You can extract this file and modify it as needed.

Category Descriptor Example

The example below shows the modification of two metadatum entries. The first, for Keywords, boosts the importance of this field by a factor of 5.0. The second, for Topic, adds a boost factor of 2.0 and localized display labels for four languages.

<?xml version="1.0" encoding="UTF-8"?>
<category id="MyExample" supportsPublic="true">
  <name>My Example Category</name>
  <metadata>
    <metadatum aggregatable="true" boost="5.0" id="Keywords" selectable="true" visible="true">
    </metadatum>
    <metadatum aggregatable="true" boost="2.0" id="Topic" selectable="true" visible="true">
      <name xml:lang="de">Thema</name>
      <name xml:lang="en">Topic</name>
      <name xml:lang="fr">Sujet</name>
      <name xml:lang="es">Tema</name>
    </metadatum>
    ... additional metadatum omitted for brevity ...
  </metadata>
</category>

Applying a Custom Category Descriptor

The easiest way to apply a custom category descriptor is to download a copy of the respective plugin from the Mindbreeze updates page. For example, if you wanted to change relevancy for crawled content, you would download Mindbreeze Web Connector.zip. Unzip the file and look for the categoryDescriptor.xml file which is located in Mindbreeze Web Connector\Web Connector\Plugin\WebConnector-18.1.4.203.zip (version number may vary).

Please note, if you update a plugin on the Mindbreeze appliance, your custom CategoryDescriptor will be overwritten. Keep a copy saved in case you need to reapply it after updating. Additional information can be found in the Mindbreeze documentation on Customizing the Category Descriptor.

Inline Boosting

Boosting can be set at query time using the Mindbreeze query language. This can be done either directly in the query box or as part of the search application’s design by applying it to the constraint property. This functionality can be leveraged to create contextual search by dynamically boosting results based on any number of factors and custom business logic.

For example:

  • ALL (company:fishbowl)^3.5 OR NOT (company:fishbowl)

Returns all results and ranks items with fishbowl in the company metadata field 3.5 times higher than other results.

  • (InSpire^2.0 OR "InSite") AND "efficient"

Results must contain InSpire or InSite and efficient. Occurrences of InSpire are twice as relevant as other terms.

  • holiday AND ((DocType:Policy)^5 OR (DocType:Memo))

Returns results which contain holiday and a DocType value of Policy or Memo. Ranks items with Policy as their DocType 5 times higher than those with the DocType of Memo.

JavaScript Boosting

JavaScript boosting is similar to inline boosting in that it is also set at query time. It serves many of the same use cases as inline boosting, but can provide a cleaner implementation for clients already working within the Mindbreeze client.js framework. The examples below show how to apply boosting to three different scenarios. Please note, any fields you wish to boost in this way should be added to the Aggregated Metadata Keys configuration for the respective indices in order to enable regex matching.

Examples

This example ranks items with fishbowl in the company metadata field 3.5 times higher than other results. For comparison, this rule would have the same effect on rankings as the inline boosting shown in the first example within the previous section.

var application = new Application({});
application.execute(function (application) {
  // Boost results whose "company" metadata matches "fishbowl" by 3.5x
  application.models.search.set("boostings.fishbowl", {
    factor: 3.5,
    query_expr: {
      label: "company", regex: "^fishbowl$"
    }
  }, { silent: true });
});

This example ranks results with a department value of accounting 1.5 times higher than other results. This can be modified to dynamically set the department to a user’s given department. For example, accounting may wish to see accounting documents boosted, whereas the engineering team would want engineering documents boosted. Please note, setting this dynamically requires access to the user’s department data, which is outside the scope of the Mindbreeze search API but is often accessible within the context of an intranet or other business application.

var application = new Application({});
application.execute(function (application) {
  // Boost results whose "department" metadata matches "accounting" by 1.5x
  application.models.search.set("boostings.accounting", {
    factor: 1.5,
    query_expr: {
      label: "department", regex: "^accounting$"
    }
  }, { silent: true });
});

This example shows how dynamic, non-indexed information (in this case, the current time) can be used to alter relevancy.  Results with a featuredmeals value of dinner are boosted by a factor of 3 when the local time is between 5 PM and 10 PM. This could be extended to boost breakfast, brunch, and lunch, for their respective date-time windows which would be helpful if users were searching for restaurants or other meal-time points of interest.

var application = new Application({});
application.execute(function (application) {
  var now = new Date().getHours();
  // Between 5 PM and 10 PM local time, boost "dinner" featured meals by 3x
  if (now >= 17 && now <= 22) {
    application.models.search.set("boostings.dinner", {
      factor: 3.0,
      query_expr: {
        label: "featuredmeals", regex: ".*dinner.*"
      }
    }, { silent: true });
  }
});

As you can see, Mindbreeze offers a variety of options for relevancy adjustments. If you have any questions about our experience working with Mindbreeze or would like to know more, please contact us.

The post Four Types of Mindbreeze Relevancy Boostings and How to Use Them appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Does LOB (un)compression affect existing rows?

Tom Kyte - Thu, 2019-01-10 13:46
Hi, I have a question regarding LOB compression. I have a LOB SecureFile table with a CLOB datatype. I want to compress this table. I read in articles that if I compress the LOB with (low/medium/high) options, only the new records will be compressed and...
Categories: DBA Blogs

Issues in OCI calls from C++ exe in Solaris platform - Oracle 12c Upgrade

Tom Kyte - Thu, 2019-01-10 13:46
Problem Description --------------------------------------------------- C++ exe's not working on a Sun Solaris 5.11 11.3 server after upgrading to OCI 12. Failed with "GPF (11) Segment Violation (SIGSEGV)" after 10 successful OCI call attempts to i...
Categories: DBA Blogs

Import db dump to production existing schema

Tom Kyte - Thu, 2019-01-10 13:46
Hi Experts, Is it possible to import a db backup dump into an existing production schema with data, without losing the existing information? Will that generate tons of duplicates? Ie Backup (dev) Current Production Test...
Categories: DBA Blogs

Modify Script.

Tom Kyte - Thu, 2019-01-10 13:46
Hello - I have a script as follows: SELECT s.schoolid sid, s.lastfirst lf, s.grade_level grl, sum(ada.membershipvalue)-sum(ada.attendancevalue) absences, sum(ada.members...
Categories: DBA Blogs

Impdp fails with errors on TEMP and UNDO

Tom Kyte - Thu, 2019-01-10 13:46
Hi, In impdp I have many times received TEMP and UNDO tablespace issues, like unable to extend tablespace. I am importing the data into the default permanent tablespace, so why are TEMP and UNDO required? --------- Thanks
Categories: DBA Blogs

Trigger to set a column value

Tom Kyte - Thu, 2019-01-10 13:46
Hi, I have one requirement for a trigger. The EMP table has a column Status. If any insert happens then Status has to be updated to 1; if any update is done, Status is set to 2. We have to write this in one trigger. Could you please write a trigger to that...
Categories: DBA Blogs

Adding a check constraint to existing table to avoid both columns to be null and allow all other 3 combinations

Tom Kyte - Thu, 2019-01-10 13:46
I have a requirement that has below output: Col1 col2 NOTNULL NOTNULL(my table already has this data and I want to keep it) NULL NOTNULL(my table already has this data and I want to keep it) NOTNULL NULL (my table already has this ...
Categories: DBA Blogs

Error ORA-01950 on trying to use ALTER TABLE for modifying partitioning scheme

Tom Kyte - Thu, 2019-01-10 13:46
Hello All, I am trying to perform some tests in LiveSQL related to changing the partitioning scheme of a table in 18c. Based on an example from the 18c documentation, I successfully created a table and several indexes, but when I attempted to ...
Categories: DBA Blogs

Robinson Jeffers

Greg Pavlik - Thu, 2019-01-10 12:25
Today is the birthday of one of the most under-rated American poets (in my view, one of the best we have produced), the builder of Tor House and Hawk Tower, which can still be visited in Carmel, California. A timely documentary on an American genius.


The website for Tor House visits, a fascinating experience:

http://www.torhouse.org/

Automated Generation For OCI IAM Policies

OTN TechBlog - Thu, 2019-01-10 10:58

As a cloud developer evangelist here at Oracle, I often find myself playing around with one or more of our services or offerings on the Oracle Cloud.  This of course means I end up working quite a bit in the Identity and Access Management (IAM) section of the OCI Compute Console.  It's a pretty straightforward concept, and likely familiar if you've worked with any other cloud provider.  I won't give a full overview here about IAM as it's been covered plenty already and the documentation is concise and easy to understand.  But one task that always ends up taking me a bit longer to accomplish than I'd like it to is IAM policy generation.  The policy syntax in OCI is as follows:

Allow <subject> to <verb> <resource-type> in <location> where <conditions>

Which seems pretty easy to follow - and it is.  The issue that I often have though is actually remembering the values to plug in for the variable sections of the policy.  Trying to remember the exact group name, or available verbs and resource types, as well as the exact compartment name that I want the policy to apply to is troublesome and usually ends up with me opening two or three tabs to look up exact spellings and case and then flipping over to the docs to get the verb and resource type just right.  So, I decided to do something to make my life a little easier when it comes to policy generation and figured that I'd share it with others in case I'm not the only one who struggles with this.  
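
For reference, here are two statements that are valid under that syntax (the group and compartment names are hypothetical):

Allow group Administrators to manage all-resources in tenancy
Allow group Developers to use instances in compartment ProjectA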

So, born out of my frustration and laziness, I present a simple project to help you generate IAM policies for OCI.  The tool is intended to be run from the command line and prompts you to make selections for each variable.  It gives you choices of available options based on actual values from your OCI account.  For example, if you choose to create a policy targeting a specific group, the tool gives you a list of your groups to choose from.  Same with verbs and resource types - the tool has a list of them built in and lets you choose which ones you are targeting instead of referring to the IAM policy documentation each time.  Here's a video demo of the tool in action:

The code itself isn't a masterpiece - there are hardcoded values for verbs and resource types because those aren't exposed via the OCI CLI or SDK in any way. But it works, and makes policy generation a bit less painful. The code behind the tool is located on GitHub, so feel free to submit a pull request to keep the tool up to date or enhance it in any way. It's written in Groovy and can be run as a Groovy script, or via java -jar. If you'd rather just get your hands on the binary and try it out, grab the latest release and give it a shot.

The tool uses the OCI CLI behind the scenes to query the OCI API as necessary.  You'll need to make sure the OCI CLI is installed and configured on your machine before you generate a policy.  I decided to use the CLI as opposed to the SDK in order to minimize external dependencies and keep the project as light as possible while still providing value.  Besides, the OCI CLI is pretty awesome and if you work with the Oracle Cloud you should definitely have it installed and be familiar with it.
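
For example, when the tool prompts you to pick a group, the list can be obtained with a call along these lines (a sketch; the tool's exact invocation may differ):

oci iam group list --compartment-id <tenancy-ocid>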

Please check out the tool and as always, feel free to comment below if you have any questions or feedback.

San Francisco Giants and Oracle Announce “Oracle Park”

Oracle Press Releases - Thu, 2019-01-10 10:00
Press Release
San Francisco Giants and Oracle Announce “Oracle Park”
New Partnership and Naming Rights Agreement to Fuel Innovative Fan Experience

Redwood Shores, Calif.—Jan 10, 2019

The San Francisco Giants and Oracle today announced they have signed a 20-year partnership providing Oracle with the naming rights to the ballpark through 2038. Beginning today, AT&T Park will be named Oracle Park. Financial terms of the agreement were not disclosed.

“We are thrilled to welcome Oracle as our naming rights partner as we move into our next decade here in China Basin,” said Giants President and CEO Laurence M. Baer. “While there were several national and local companies interested in the opportunity, Oracle—a longstanding partner of the Giants—was a perfect fit because of its deep roots in the Bay Area, its position as a global leader in technology and innovation, and its shared commitment to community values of diversity and inclusion, sustainability, education and philanthropy.  We look forward to engaging in a model partnership.”

“We are extremely proud that one of the best and most storied ballparks in America will now be called Oracle Park. The Giants have always been on the forefront of bringing innovative experiences to baseball, and we are excited to continue that tradition,” said Mark Hurd, CEO of Oracle. “Together we will create an incredible fan experience and develop programs to engage and impact the community in new ways.”

The Giants’ previous naming rights agreement with AT&T ran through the end of 2019. However, in preliminary renewal discussions, AT&T informed the Giants that changes to AT&T’s corporate sponsorship strategy would give the Giants the opportunity to seek another naming rights partner and begin a new agreement one year early. Oracle immediately stepped up when the Giants proposed the naming rights opportunity and the two parties quickly agreed to terms over the holidays in order to prepare for the 2019 baseball season. 

“I want to thank AT&T for a truly exceptional partnership. Their support of this facility and their ongoing investment over the past two decades has played a major role in the unprecedented success and popularity of our home,” continued Baer. “We are proud to have hosted the 2002, 2010, 2012 and 2014 World Series here, the 2007 All Star Game and set the National League sellout streak record of 530 games from 2010 to 2017.”

Highlights of the agreement include:

  • Ballpark Capital & Technology Upgrades: Oracle will partner with the Giants to invest in a number of improvements to Oracle Park over the next five years, including the addition of a new state-of-the-art scoreboard and signage. The Giants and Oracle will also utilize emerging technologies to create unique experiences for fans. Additional ballpark upgrades will be announced in the coming weeks.
  • Community Programs: The Giants and Oracle will develop a signature community outreach program.
  • Hospitality and Experiential: Together Oracle and the Giants will build unique sports hospitality experiences to engage Oracle customers and members of the community.
History of Giants and Oracle Partnership

Oracle and the Giants have a longstanding partnership. Together the two companies have embarked on numerous promotional, advertising and philanthropic efforts. For example, for the past 15 years, Oracle sponsored the Giants Community Spotlight, an in-game scoreboard feature, which raises awareness for issues and causes important to the Bay Area community and highlights the impactful work being done by many of the Giants’ non-profit partners.

Oracle has been a proud sponsor of Giants Enterprises—the wholly-owned subsidiary of the Giants responsible for the non-baseball and special events for the organization. The annual Oracle OpenWorld CloudFest concerts have been hosted at the stadium for the past three years.

Giants Enterprises also delivered the Official Spectator Experience for the 34th America’s Cup held in San Francisco in 2013, which Oracle Team USA won. Giants Enterprises commercialized the three-month international sailing event for the first time in its 160-year history by creating shore-line ticketed products, producing customized hospitality packages, developing a spectator-boat license program and managing end-to-end sales, customer service and event execution. An integrated marketing campaign delivered sold-out dates, record-breaking sales and incremental revenue through dynamic pricing.

Contact Info
Jessica Moore
Oracle
650-506-3297
jessica.moore@oracle.com
Staci Slaughter
San Francisco Giants
415-972-1960
sslaughter@sfgiants.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About the San Francisco Giants

One of the oldest teams in Major League Baseball, the 137-year-old franchise moved to San Francisco from New York in 1958. After playing a total of 42 years in Seals Stadium and Candlestick Park, the team moved to the privately constructed, downtown ballpark on the corner of 3rd and King. 2019 will mark the Giants’ 20th season playing on the shores of McCovey Cove at the newly named Oracle Park. The organization is widely recognized for its innovative business practices and baseball excellence. In 2010, the franchise was named the Sports Organization of the Year by Street & Smith’s Sports Business Journal and in 2012 was named Organization of the Year by Baseball America. Oracle Park is also the only ballpark in the country to have earned Silver, Gold and Platinum LEED certification for an existing building.

Since opening its gates, Oracle Park has become internationally-renowned as a premier venue in the world of both sports and entertainment. On the diamond, more than 59 million spectators have witnessed a number of magical moments, including three World Series Championships (2010, 2012 & 2014), the raising of four National League Pennants and seven playoff appearances. On June 13, 2012, the organization’s first-ever Perfect Game was thrown by Giants ace Matt Cain. On July 10, 2007, San Francisco was the center of the baseball universe when it hosted the 78th Major League Baseball All-Star Game. The ballpark has played host to some of music’s biggest acts, including Lady Gaga, Beyoncé & Jay Z, Ed Sheeran, the Rolling Stones, the Eagles, Bruce Springsteen and the E-Street Band, Green Day and Billy Joel. It also was the site of the 2018 Rugby World Cup Sevens.

Off the field, the Giants have one of the premier community outreach programs in professional sports. Through community outreach programs, the Giants and the Giants Community Fund work with corporate and non-profit partners to raise awareness, educate and generate interest in a variety of issues important to both their fans and community. These issues include education/literacy, violence prevention, health and youth recreation and fitness. The Giants Community Fund’s Junior Giants Baseball Program received the 2015 Commissioner’s Award for Philanthropic Excellence and the San Francisco Giants were named ESPN Sports Humanitarian Team of the Year in July of 2016.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Jessica Moore

  • 650-506-3297

Staci Slaughter

  • 415-972-1960

Things Remembered Personalizes Customer Experience with Oracle Commerce Cloud

Oracle Press Releases - Thu, 2019-01-10 06:55
Press Release
Things Remembered Personalizes Customer Experience with Oracle Commerce Cloud
Oracle helps specialty retailer increase sales, expand customer choice and scale business

Redwood Shores, Calif.—Jan 10, 2019

Things Remembered, the leading North American retailer of personalized merchandise and experiences, has selected Oracle Commerce Cloud to deliver a seamless customer experience online and in-store. With Oracle Commerce Cloud, Things Remembered has been able to take advantage of the cloud to deliver an improved omnichannel experience with the ability to scale in preparation for the Holiday season last year.

“To ensure we can always deliver the best possible customer experience, we needed a flexible and scalable technology platform that would allow us to stay ahead of rapidly changing customer behaviors and expectations,” said Surya Koppera, associate vice president ecommerce, Things Remembered. “With Oracle Commerce Cloud, we have been able to enhance our website performance, improve the checkout experience and quickly launch innovative new services. This has made a huge difference to our business by helping us to support an increase in traffic and sales. And as expected, we didn’t experience any issues with website performance during the 2018 Holiday season.”

Things Remembered is a well-known retailer specializing in personalized gifts and engraving with more than 400 stores across the United States and Canada. To meet increasing customer expectations and to give customers the choice to shop when and where they want, Things Remembered selected Oracle Commerce Cloud. With Oracle Commerce Cloud, Things Remembered is able to unify traditional and online channels by offering home delivery and same-day pickup in-store. Additionally, to support increasing demand during the Holiday shopping season, Oracle Commerce Cloud enabled Things Remembered to improve site performance. During performance testing, the company was able to support a 300% increase in online volume versus its historical highest volume at peak shopping times with the new platform.

“Hundreds of retailers are turning to Oracle Commerce Cloud to personalize customer experiences and drive tangible business results,” said Katrina Gosek, senior director, digital experience product strategy, Oracle. “Things Remembered is a great example of how an in-store retailer is utilizing technology to create fresh and rewarding customer experiences across channels.”

Oracle Commerce Cloud is part of Oracle Customer Experience (CX) Cloud, which empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences and outcomes, Oracle CX Cloud Suite helps customers reduce IT complexity, deliver innovative customer experiences and achieve predictable and tangible business results.

For additional information about Oracle CX, follow @OracleCX on Twitter, LinkedIn and Facebook or visit SmarterCX.com. Learn more about Oracle Retail here.

Contact Info
Kimberly Guillon
Oracle
209.601.9152
kim.guillon@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kimberly Guillon

  • 209.601.9152

Getting started Workshops for Autonomous DW / TP

Oracle Autonomous Database is built around the market-leading Oracle Database and comes in a fully automated Data Warehouse edition and an Online Transaction Processing edition, each with specific features...

We share our skills to maximize your revenue!
Categories: DBA Blogs
