PowerMock: MockClassLoader Issue with Log4j

When running a PowerMock test, Log4j gives the following (or a similar) error:-

log4j:ERROR A "org.apache.log4j.xml.DOMConfigurator" object is not assignable to a "org.apache.log4j.spi.Configurator" variable.
log4j:ERROR The class "org.apache.log4j.spi.Configurator" was loaded by
log4j:ERROR [org.powermock.core.classloader.MockClassLoader@14a55f2] whereas object of type
log4j:ERROR "org.apache.log4j.xml.DOMConfigurator" was loaded by [sun.misc.Launcher$AppClassLoader@92e78c].
log4j:ERROR Could not instantiate configurator [org.apache.log4j.xml.DOMConfigurator].

or

Caused by: org.apache.commons.logging.LogConfigurationException:Invalid class loader hierarchy.  
You have more than one version of 'org.apache.commons.logging.Log' visible, which is not allowed.

There are a couple of different solutions to this:
(1) Upgrade to PowerMock 1.3+.
(2) Make use of the @PowerMockIgnore annotation at the class level of the test.
For example, if using Log4j, use `@PowerMockIgnore("org.apache.log4j.*")`; if using Commons Logging, use `@PowerMockIgnore("org.apache.commons.logging.*")`.

Reference URL :-
http://powermock.googlecode.com/svn-history/r2047/wiki/FAQ.wiki

How to handle special characters (such as '&') in Oracle?

Following are a few options that can be tried to handle special characters in Oracle:-

Case Study 1

SELECT a.employee_id,
a.employee_code,
a.employee_display_name,
replace(a.employee_display_name,'&'||'reg;',NULL) updated_emp_display_name
FROM employee a
WHERE a.employee_id IN (26, 28, 48, 51, 61, 104, 39, 107);

Case Study 2

-- select ASCII('&') from dual
-- select CHR(38) from dual;

SELECT a.employee_id,
a.employee_code,
a.employee_display_name,
replace(a.employee_display_name,chr(38)||'reg;',NULL) updated_emp_display_name
FROM employee a
WHERE a.employee_id IN (26, 28, 48, 51, 61, 104, 39, 107);

Case Study 3

set scan off
-- SET SCAN OFF tells SQL*Plus not to scan for substitution variables in the statements.

SELECT a.employee_id,
a.employee_code,
a.employee_display_name,
replace(a.employee_display_name,'®',NULL) updated_emp_display_name
FROM employee a
WHERE a.employee_id IN (26, 28, 48, 51, 61, 104, 39, 107);

Case Study 4

set define off

SELECT a.employee_id,
a.employee_code,
a.employee_display_name,
replace(a.employee_display_name,'®',NULL) updated_emp_display_name
FROM employee a
WHERE a.employee_id IN (26, 28, 48, 51, 61, 104, 39, 107);

set define on

How to Lock a Table in Oracle

While working on a requirement, we had to reproduce a ReadTimeOut condition from one environment in another environment. We used the table-level lock operation in Oracle and achieved the functionality accordingly. Following is brief information on the same:-

PL/SQL Developer Session-1:-

-- Creating Data-Set
create table emp (emp_id number, emp_name varchar2(20));
insert into emp values (1,'vipin');
insert into emp values (2,'vinay');
select * from emp;
commit;
-- Obtaining Table-Level Lock
lock table emp in exclusive mode nowait;
rollback; -- A locked table remains locked until you either commit the transaction or roll it back, either entirely or to a savepoint taken before you locked the table.

PL/SQL Developer Session-2:-

-- Verifying and extending the data set
select * from emp;
insert into emp values (3,'ananya'); -- the INSERT will block until the lock held by the other session is released.
commit;

Reference:-
https://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9015.htm

Achieve Network Encryption in WebSphere for Oracle

To enable network encryption in WebSphere, we need to configure the custom properties of the data source.

WebSphere Pooled Oracle Data-Source
In the WebSphere console, under JDBC providers > Oracle JDBC Driver > Data sources >
(1) Click on Custom Properties
(2) Add the following new property:
Name: connectionProperties
Value: oracle.net.encryption_client=REQUESTED;oracle.net.encryption_types_client=RC4_128

Code Snippet:-

...
// Request encryption on the JDBC thin connection
// (valid levels are ACCEPTED, REJECTED, REQUESTED and REQUIRED)
Properties prop = new Properties();
prop.put("oracle.net.encryption_client", "REQUIRED");
prop.put("oracle.net.encryption_types_client", "( RC4_128 )");
// Data-integrity (checksumming) properties can be set the same way:
//prop.put("oracle.net.crypto_checksum_client", "REQUESTED");
//prop.put("oracle.net.crypto_checksum_types_client", "( MD5 )");

OracleDataSource ods = new OracleDataSource();
ods.setProperties(prop);
ods.setURL("jdbc:oracle:thin:@localhost:1521:main");
Connection conn = ods.getConnection();
...
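
A quick way to verify that encryption was actually negotiated is to query V$SESSION_CONNECT_INFO from the same session; this is only a sketch, and it assumes the user has been granted access to that view (the exact banner text varies by database version):

-- Inspect the security services negotiated for the current session.
-- If encryption is active, one of the banner rows should mention the
-- encryption service (e.g. RC4 or AES), depending on the configuration.
SELECT network_service_banner
FROM   v$session_connect_info
WHERE  sid = SYS_CONTEXT('USERENV', 'SID');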

Reference:-
https://docs.oracle.com/cd/B19306_01/java.102/b14355/clntsec.htm#EHAFHEIG

ORA-00979: not a GROUP BY expression

While working on a critical issue, we came across a scenario in which the following exception was obtained from Oracle: "ORA-00979: not a GROUP BY expression".

After doing an analysis, it has been observed that a redacted column cannot be used in a SQL statement when a function/expression is applied to the column and the result is used in a GROUP BY clause. If this is attempted, the error "ORA-00979: not a GROUP BY expression" is returned.

Sample example that works

SELECT COLUMN_NAME_1
FROM TABLE_NAME
WHERE COLUMN_NAME_ID BETWEEN 500 AND 600
GROUP BY COLUMN_NAME_1

Sample example that doesn't work (when COLUMN_NAME_1 is a redacted column)

SELECT SUBSTR(COLUMN_NAME_1, 1, 5)
FROM TABLE_NAME
WHERE COLUMN_NAME_ID BETWEEN 500 AND 600
GROUP BY SUBSTR(COLUMN_NAME_1, 1, 5)

Learning:
Do not include any redacted columns in a SQL expression that is used in a GROUP BY clause in a SQL statement. Oracle does not support this behavior, and raises an "ORA-00979: not a GROUP BY expression" error. This happens because, internally, the expression in the SELECT list must be modified by Data Redaction, but this causes it to no longer be found when it comes time to process the GROUP BY clause (which is currently not updated by Data Redaction), leading to this unintended error message.

Reference:
https://docs.oracle.com/cd/E11882_01/network.112/e40393/redaction_guidelines.htm#ASOAG10532
(Look up the contents under: 7.1 General Usage Guidelines)

Dynamic Sampling in Oracle

Dynamic Sampling (DS), invoked for example via the /*+ dynamic_sampling(4) */ hint, was introduced in Oracle Database 9i Release 2 to improve the optimizer's ability to generate good execution plans. The most common misconception is that DS can be used as a substitute for optimizer statistics. The goal of DS is to augment the optimizer statistics; it is used when regular statistics are not sufficient to get good quality cardinality estimates.

During the compilation of a SQL statement, the optimizer decides whether or not to use DS by considering whether the available statistics are sufficient to generate a good execution plan. If they are not, dynamic sampling will be used; it typically compensates for missing or insufficient statistics that would otherwise lead to a very bad plan. When one or more of the tables in the query do not have statistics, DS is used by the optimizer to gather basic statistics on these tables before optimizing the statement. The statistics gathered in this case are not of as high quality, or as complete, as the statistics gathered using the DBMS_STATS package; this trade-off is made to limit the impact on the compile time of the statement.

-- Gathering schema statistics with DBMS_STATS (higher quality than what DS collects on the fly)
begin
   dbms_stats.gather_schema_stats(
      ownname          => 'SCOTT', 
      estimate_percent => dbms_stats.auto_sample_size, 
      method_opt       => 'for all columns size skewonly', 
      degree           => 7
   );
end;
/

The second scenario where DS is used is when the statement contains a complex predicate expression and extended statistics are not available. Extended statistics were introduced in Oracle Database 11g Release 1 with the goal of helping the optimizer get good quality cardinality estimates for complex predicate expressions. For example, if you had a simple query with WHERE clause predicates on two correlated columns, standard statistics would not be sufficient. Consider the following SQL statement and its execution plan:

Explain plan for
     Select *
          From Customers
          Where cust_city='Los Angeles'
          And cust_state_province='CA';

Select * from table(dbms_xplan.display());

With standard statistics the Optimizer estimates the cardinality as 90 rows. The actual number of rows returned by this query is 916. By setting OPTIMIZER_DYNAMIC_SAMPLING to level 4, the optimizer will use dynamic sampling to gather additional information about the complex predicate expression. The additional information provided by dynamic sampling allows the optimizer to generate a more accurate cardinality estimate and therefore a better performing execution plan.

Alter session set optimizer_dynamic_sampling=4;

Explain plan for
     Select *
          From Customers
          Where cust_city='Los Angeles'
          And cust_state_province='CA';

Select * from table(dbms_xplan.display());

You may now be wondering why we had to set the parameter OPTIMIZER_DYNAMIC_SAMPLING to 4. Dynamic sampling is controlled by the parameter OPTIMIZER_DYNAMIC_SAMPLING, which can be set to different levels (0-10). These levels control two different things: when dynamic sampling kicks in, and how large a sample size will be used to gather the statistics. The greater the sample size, the bigger the impact DS has on the compilation time of a query.

Level | When dynamic sampling will be used | Sample size (blocks)
0 | Switches off dynamic sampling | N/A
1 | At least one non-partitioned table in the statement has no statistics | 32
2 (default) | One or more tables in the statement have no statistics | 64
3 | Any statement that meets the level 2 criteria, plus any statement that has one or more expressions used in the WHERE clause predicates, e.g. WHERE SUBSTR(CUSTLASTNAME,1,3) or WHERE a + b = 5 | 64
4 | Any statement that meets the level 3 criteria, plus any statement that has complex predicates (an OR or AND operator between multiple predicates on the same table) | 64
5 | Any statement that meets the level 4 criteria | 128
6 | Any statement that meets the level 4 criteria | 256
7 | Any statement that meets the level 4 criteria | 512
8 | Any statement that meets the level 4 criteria | 1024
9 | Any statement that meets the level 4 criteria | 4096
10 | All statements | All blocks

There are two ways to use dynamic sampling:
(1) The OPTIMIZER_DYNAMIC_SAMPLING parameter can be set at the database instance level and can also be overridden at the session level with the ALTER SESSION command.
(2) The DYNAMIC_SAMPLING query hint can be added to specific queries.
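
A minimal sketch of both options, reusing the Customers example from above (the alias c and level 4 are illustrative choices, and SCOPE = BOTH assumes an spfile is in use):

-- (1) Set the parameter instance-wide, or override it for the current session
ALTER SYSTEM SET optimizer_dynamic_sampling = 4 SCOPE = BOTH;
ALTER SESSION SET optimizer_dynamic_sampling = 4;

-- (2) Use the hint for a single statement (optionally naming a table alias)
SELECT /*+ dynamic_sampling(c 4) */ *
FROM   customers c
WHERE  cust_city = 'Los Angeles'
AND    cust_state_province = 'CA';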

Another Example Scenario

Poor cardinality estimates, looking at a quarter of the table data

SQL> select * from t where flag1 = 'N' and flag2 = 'N';

Execution Plan
----------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 17014 |  1744K|   292   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| T    | 17014 |  1744K|   292   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
----------------------------------------------------

   1 - filter("FLAG1" = 'N' AND "FLAG2" = 'N')

The above artifacts show that when the predicate gets just a little more complex, the optimizer misses the estimated cardinality by a huge amount, because it doesn't know about the relationship between FLAG1 and FLAG2.

Good cardinality estimate, looking at only six rows

SQL> select /*+ dynamic_sampling(t 3) */ * from t where flag1 = 'N' and flag2 = 'N';

Execution Plan
-----------------------------
Plan hash value: 470836197

------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    6 |   630 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |    6 |   630 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |    6 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
----------------------------------------------------

   2 - access("FLAG1"='N' AND "FLAG2"='N')

Note the much better row estimate (6 instead of more than 17,000) in this plan compared to the previous one, and also note the very different (and now optimal) plan. Instead of a full table scan, the optimizer has decided to use the index, and this execution plan will definitely be optimal compared to the original full table scan, because in reality no data will be returned and the index will discover that instantly.

Cost-Based Optimizer estimates of Join Cardinality

In the absence of column histograms, the Oracle CBO must guess the join cardinality, and it sometimes gets it wrong. This is one reason why the ORDERED hint is one of the most popular SQL tuning hints; using the ORDERED hint allows you to specify that the tables be joined together in the same order that they appear in the FROM clause, like this:

select /*+ ORDERED */ student_name
     from student
          natural join course
          natural join professor
     where
          professor = 'Vipin Chauhan'
          and course = 'Oracle';

ORA-01775 Looping chain of Synonyms

ORA-01775 "looping chain of synonyms" is caused by a series of CREATE SYNONYM statements in which a synonym is defined that ultimately refers back to itself.

For example, the following definitions are circular:
CREATE SYNONYM synonym_name1 FOR synonym_name2;
CREATE SYNONYM synonym_name2 FOR synonym_name3;
CREATE SYNONYM synonym_name3 FOR synonym_name1;

To resolve the issue, we need to change one synonym definition so that it points to a base table or view and then retry the operation.
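
For instance, assuming synonym_name3 is the definition chosen to be corrected and base_table is a placeholder for the actual underlying object, the loop can be broken as follows:

-- Re-point one synonym in the chain at a real table or view to break the loop
CREATE OR REPLACE SYNONYM synonym_name3 FOR base_table;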

The SQL scripts given below can be used to look up the synonyms:-

SELECT TABLE_OWNER, TABLE_NAME, DB_LINK FROM DBA_SYNONYMS
   WHERE OWNER = 'PUBLIC' AND SYNONYM_NAME LIKE '%DATA_REQUEST%';

SELECT * FROM ALL_SYNONYMS WHERE SYNONYM_NAME LIKE '%DATA_REQUEST%';

SELECT * FROM ALL_OBJECTS WHERE OBJECT_NAME LIKE '%DATA_REQUEST%';