
regarding row number

Discussion in 'SQL PL/SQL' started by karthikeyanc2003, Oct 7, 2009.

  1. karthikeyanc2003

    karthikeyanc2003 Active Member

    Hi folks,

    I have a requirement to insert 1 million rows from one table into another, transforming the data with a function on table 1. Since this takes time, the client has asked me to insert the records in batches of 1 lakh (100,000): rows 1 to 100,000, then 100,001 to 200,000, and so on, committing after each batch. But I cannot get the data with SELECT * FROM table_x WHERE ROWNUM BETWEEN 100001 AND 200000.

    Please let me know how to proceed.

    Thanks in advance.

    Karthikeyan C
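
    A ROWNUM predicate like that returns no rows because ROWNUM is assigned as each row passes the WHERE clause: the first candidate row gets ROWNUM = 1, fails BETWEEN 100001 AND 200000, is discarded, and the next candidate is assigned ROWNUM = 1 again. The usual workaround is to materialize the row number in an inline view before filtering on it. A sketch, where table and column names are placeholders for your actual schema (an ORDER BY is needed for the batches to be deterministic):

    ```sql
    -- Rows 100,001 through 200,000 of the source, by materializing
    -- ROWNUM in an inline view and then filtering on the alias.
    INSERT INTO target_table
    SELECT id, val
    FROM  (SELECT s.id, s.val, ROWNUM rn
           FROM   (SELECT id, val FROM source_table ORDER BY id) s
           WHERE  ROWNUM <= 200000)
    WHERE  rn > 100000;
    COMMIT;
    ```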
     
  2. debasisdas

    debasisdas Active Member

    Location:
    Bangalore, India
    1. Create a view on the source table that applies the function.
    2. Disable all the indexes on the target table.
    3. Insert into the target table by selecting from the view.
    4. Rebuild the indexes on the target table.
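
    The steps above might look like this in SQL. Note that Oracle has no DISABLE for a plain B-tree index; the closest equivalent is marking it UNUSABLE. All object names here are placeholders:

    ```sql
    -- 1. A view applying the transformation function to the source
    CREATE OR REPLACE VIEW v_transformed AS
    SELECT id, my_function(val) AS val      -- placeholder names
    FROM   source_table;

    -- 2. A plain index cannot be 'disabled'; marking it UNUSABLE is the
    --    nearest equivalent (DML skips it when skip_unusable_indexes
    --    is true, the default in recent releases)
    ALTER INDEX target_table_ix UNUSABLE;

    -- 3. Load the target from the view
    INSERT INTO target_table
    SELECT * FROM v_transformed;
    COMMIT;

    -- 4. Rebuild the index afterwards
    ALTER INDEX target_table_ix REBUILD;
    ```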
     
  3. Sikkandar.S.P

    Sikkandar.S.P Active Member

    Location:
    Chennai

    Instead of inserting into the destination table, can you try creating the table using parallelism, choosing a reasonable degree (say 10), and the UNRECOVERABLE option?

    This would be very fast, as it generates very little redo.
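
    A sketch of that suggestion; UNRECOVERABLE is the legacy spelling and NOLOGGING is the current equivalent, and the object names are placeholders:

    ```sql
    -- Direct-path CREATE TABLE AS SELECT with parallelism and
    -- minimal redo generation
    CREATE TABLE target_table
    PARALLEL 10
    NOLOGGING
    AS
    SELECT id, my_function(val) AS val      -- placeholder names
    FROM   source_table;
    ```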

     
  4. zargon

    zargon Community Moderator Forum Guru

    Location:
    Aurora, CO
    This requirement is illogical with respect to how Oracle processes data. And as you've discovered, ROWNUM is not a valid restricting criterion because it is assigned starting at 1 as rows meet the WHERE criteria, so a predicate like ROWNUM BETWEEN 100001 AND 200000 can never be satisfied. You could instead use BULK COLLECT and FORALL to insert into the table:

    Code (SQL):
    DECLARE
       -- driver is assumed to have exactly the columns id and val
       TYPE idTyp IS TABLE OF driver%ROWTYPE
           INDEX BY BINARY_INTEGER;
       ids idTyp;
       CURSOR c IS SELECT id, val FROM driver;
    BEGIN
       OPEN c;
       LOOP
         FETCH c BULK COLLECT INTO ids LIMIT 1000000;
         EXIT WHEN ids.COUNT = 0;
         FORALL i IN ids.FIRST..ids.LAST
            INSERT INTO temp_1 VALUES ids(i);
         COMMIT;  -- commit each 1,000,000-row batch, per the requirement
       END LOOP;
       CLOSE c;
    END;
    /
    Notice that you're fetching 1 million rows per 'batch' and inserting each batch into the table without trying to use ROWNUM (which, as you've seen, won't work as you have it written). Using BULK COLLECT and FORALL is likely more efficient than your attempt at plain SQL.
     
  5. zargon

    zargon Community Moderator Forum Guru

    Which doesn't solve the problem: the destination table still needs the data inserted into it, and this appears to be an ongoing process (each data load builds upon the previous set of inserts). The issue isn't redo log generation; it's the speed of the inserts and the mechanism to 'page' the data into 1,000,000-row 'chunks'. The PL/SQL example I provided, utilizing BULK COLLECT and FORALL, does address these issues. A small change to the insert statement would also improve the speed slightly:

    Code (SQL):
    DECLARE
       -- driver is assumed to have exactly the columns id and val
       TYPE idTyp IS TABLE OF driver%ROWTYPE
           INDEX BY BINARY_INTEGER;
       ids idTyp;
       CURSOR c IS SELECT id, val FROM driver;
    BEGIN
       OPEN c;
       LOOP
         FETCH c BULK COLLECT INTO ids LIMIT 1000000;
         EXIT WHEN ids.COUNT = 0;
         FORALL i IN ids.FIRST..ids.LAST
            INSERT /*+ append */ INTO temp_1 VALUES ids(i);
         COMMIT;  -- commit each batch; also mandatory if the hint takes
                  -- effect, since a direct-path insert leaves the table
                  -- untouchable by this session until commit (ORA-12838)
       END LOOP;
       CLOSE c;
    END;
    /
    The /*+ append */ hint requests a direct-path insert, which bypasses the PCTFREE/PCTUSED mechanism and avoids checking the freelists for available space by writing new data blocks above the high-water mark (at the end of the current data), decreasing the time required for the inserts (slightly). Note that after a direct-path insert the same session cannot read or modify the table again until the transaction is committed.
     
  6. zargon

    zargon Community Moderator Forum Guru

    And slow the process considerably by querying a query rather than a table? Using PL/SQL with BULK COLLECT and FORALL would be more efficient.

    One cannot 'disable' indexes; one disables constraints (at best, an index can be marked UNUSABLE). The indexes would have to be dropped to prevent the maintenance during the inserts, and that may not be necessary, as the performance 'boost' could be negligible.

    Again, why create a view and further increase the processing time?

    Rebuilding may also be unnecessary, as the indexes would never have been 'disabled'.