
Thursday 4 October 2012

Thread 1 cannot allocate new log



Thread 1 cannot allocate new log, sequence 1466

Checkpoint not complete

Current log# 3 seq# 1465 mem# 0: /…/…redo03.log


It might say Archival required instead of Checkpoint not complete, but the effect is pretty much the same. This is really something the DBA should be looking out for. This message will be written to alert.log on the server whenever the database attempts to reuse an online redo log file and finds that it can’t. This happens when DBWR has not yet finished checkpointing the data protected by the redo log or ARCH has not finished copying the redo log file to the archive destination.

At this point, the database effectively halts as far as the end user is concerned. It stops cold. DBWR or ARCH will be given priority to finish its work: flushing the dirty blocks to disk or copying the redo log file to the archive destination. Upon completion of the checkpoint or archival, everything goes back to normal.

The reason the database suspends user activity is that there is simply no place to record the changes the users are making. Oracle is attempting to reuse an online redo log file, but because either the file would be needed to recover the database in the event of a failure (Checkpoint not complete), or the archiver has not yet finished copying it (Archival required), Oracle must wait (and the end users will wait) until the redo log file can safely be reused.
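You can usually tell which of the two cases you are in straight from V$LOG: a group that is still needed for crash recovery shows STATUS = 'ACTIVE' (checkpoint not complete), while one the archiver has not finished with shows ARCHIVED = 'NO' (archival required, in ARCHIVELOG mode). A minimal sketch using only the standard dynamic performance view:

-- Which redo groups are blocking reuse?
-- ACTIVE = checkpoint not complete; ARCHIVED = 'NO' = archival required.
SELECT group#, thread#, sequence#, bytes/1024/1024 AS size_mb, archived, status
  FROM v$log
 ORDER BY group#;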

If you see that your sessions spend a lot of time waiting on “log file switch,” “log buffer space,” or “log file switch checkpoint or archival incomplete,” you are most likely hitting this. You will notice it during prolonged periods of database modification if your log files are sized incorrectly, or if DBWR and ARCH need to be tuned by the DBA or system administrator. I frequently see this issue with the “starter” database that has not been customized. The “starter” database typically sizes the redo logs far too small for any significant amount of work (including the initial build of the data dictionary itself). As soon as you start loading up the database, you will notice that the first 1,000 rows go fast, and then things start going in spurts: 1,000 go fast, then hang, then go fast, then hang, and so on. These are the indications that you are hitting this condition.
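If you want to confirm it from the instance rather than from user complaints, the cumulative wait interface shows how much time has been lost. A quick sketch against V$SYSTEM_EVENT, using the standard wait event names for these stalls:

-- Instance-wide time spent stalled on log switches and log buffer space.
SELECT event, total_waits, ROUND(time_waited_micro/1000000) AS seconds_waited
  FROM v$system_event
 WHERE event IN ('log file switch (checkpoint incomplete)',
                 'log file switch (archiving needed)',
                 'log buffer space')
 ORDER BY time_waited_micro DESC;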


There are several things you can do to solve this issue:

  • Make DBWR faster. Have your DBA tune DBWR by enabling ASYNC I/O, using DBWR I/O slaves, or using multiple DBWR processes. Look at the I/O on the system and see whether one disk or set of disks is “hot,” so that you need to spread the data out. The same general advice applies to ARCH as well. The pro of this approach is that you get “something for nothing”: increased performance without changing any logic, structures, or code. There really is no downside. (The relevant parameters are sketched after this list.)
  • Add more redo log files. This will postpone the Checkpoint not complete in some cases and, after a while, may postpone it long enough that it no longer happens at all (because you gave DBWR enough breathing room to checkpoint). The same applies to the Archival required message. The benefit of this approach is the removal of the “pauses” in your system. The downside is that it consumes more disk, but the benefit far outweighs that here. (See the DDL sketch after this list.)
  • Re-create the log files with a larger size. This will extend the amount of time between filling an online redo log file and needing to reuse it. The same applies to the Archival required message, if redo log file usage is “bursty.” If you have a period of massive log generation (nightly loads, batch processes) followed by periods of relative calm, then larger online redo logs can buy enough time for ARCH to catch up during the calm periods. The pros and cons are identical to those of adding more files, and the same sketch below applies. Additionally, it may postpone a checkpoint until later, since checkpoints happen at each log switch (at least), and the log switches will now be further apart.
  • Make checkpointing happen more frequently and more continuously. Use a smaller block buffer cache (not entirely desirable) or parameter settings such as FAST_START_MTTR_TARGET, LOG_CHECKPOINT_INTERVAL, and LOG_CHECKPOINT_TIMEOUT. These will force DBWR to flush dirty blocks more frequently (see the sketch after this list). The benefit of this approach is that recovery time from a failure is reduced: there will always be less work in the online redo logs to be applied. The downside is that blocks may be written to disk more frequently if they are modified often, the buffer cache will not be as effective as it could be, and it can defeat the block cleanout mechanism.
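For the first bullet, the knobs the DBA would typically reach for are instance parameters. A hedged sketch: the parameter names are the standard ones, the values are purely illustrative, and the static parameters require an SPFILE change plus an instance restart.

-- Asynchronous I/O and multiple database writers (static: SCOPE=SPFILE, then restart).
ALTER SYSTEM SET disk_asynch_io      = TRUE SCOPE=SPFILE;
ALTER SYSTEM SET db_writer_processes = 2    SCOPE=SPFILE;
-- Only if true async I/O is unavailable on the platform; in practice not combined
-- with multiple DBWR processes.
-- ALTER SYSTEM SET dbwr_io_slaves = 4 SCOPE=SPFILE;
-- Give the archiver more processes to work with (dynamic).
ALTER SYSTEM SET log_archive_max_processes = 4;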
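For the second and third bullets, adding groups and re-creating them at a larger size is plain DDL. A sketch, assuming a hypothetical file layout and illustrative sizes; confirm a group is INACTIVE in V$LOG before dropping it, and remove its file at the OS level afterward.

-- Add another group (path and size are illustrative).
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/ORCL/redo04.log') SIZE 512M;

-- To grow the existing groups: add larger replacements, then cycle past and drop the old ones.
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/ORCL/redo05.log') SIZE 512M;
ALTER SYSTEM SWITCH LOGFILE;    -- move the current log forward
ALTER SYSTEM CHECKPOINT;        -- let the old groups go INACTIVE
-- Check STATUS = 'INACTIVE' in V$LOG before each drop.
ALTER DATABASE DROP LOGFILE GROUP 1;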
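For the last bullet, the incremental-checkpoint targets are ordinary dynamic parameters; the values below are illustrative only.

-- Target roughly 60 seconds of crash recovery, which drives more continuous checkpointing.
ALTER SYSTEM SET fast_start_mttr_target = 60;
-- Older-style alternatives: checkpoint at least every N redo blocks / every N seconds.
-- ALTER SYSTEM SET log_checkpoint_interval = 100000;
-- ALTER SYSTEM SET log_checkpoint_timeout  = 300;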

