Friday, September 30, 2005

PL/SQL Procedure Call Overhead

Is there much overhead in calling PL/SQL procedures?

I assume that if the answer is "yes," you'll want to avoid procedure calls, which would likely mean making your procedures bigger (by combining several into one). That makes me shudder, because clean, modular code is easier to read and maintain, not to mention that it's easier to develop new code if it's based on reliable, tested code.

I assume there is at least some overhead to calling PL/SQL procedures. I mean, if the procedure is not in the cache, you'll obviously have to go to the disk to fetch it.

If it's already in memory, there could still be some overhead in the passing of parameters. UNLESS you can use the "NOCOPY" hint, that is.

http://thinkoracle.blogspot.com/2005/05/nocopy-hint.html
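For reference, here is a minimal sketch of what a NOCOPY declaration looks like (the procedure and parameter names are just illustrative):

```sql
-- NOCOPY asks the compiler to pass this OUT/IN OUT parameter by
-- reference rather than copying it in and out. It is only a hint:
-- Oracle is free to ignore it, e.g. when the call crosses a
-- database link or requires an implicit conversion.
CREATE OR REPLACE PROCEDURE FillBuffer (
  out_buffer IN OUT NOCOPY VARCHAR2)
IS
BEGIN
  out_buffer := out_buffer || ' (filled)';
END FillBuffer;
```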

But to be honest, I don't know how much overhead any particular procedure call will have. Sorry to those who read the title and hoped the content would contain the definitive answer. I might have something more conclusive after doing some research. In the meantime, here is what I do every time I have a question: I test it.

Even if I showed you a quote in a book that shows you how to calculate the overhead, I would STILL advise testing it. Documents can be out-of-date, misunderstood and just plain wrong. You need to test it.

Here is how to test it.

1. Write (or identify) a stored procedure that reflects your business requirements.

CREATE OR REPLACE PROCEDURE DoIt
IS
BEGIN
  NULL;
  -- Do some stuff in here!
END DoIt;

2. Now split that stored procedure into two (or more) parts, and a master proc

CREATE OR REPLACE PROCEDURE DoItPartOne
IS
BEGIN
  NULL;
  -- Do part of the stuff here...
END DoItPartOne;

...etc!

CREATE OR REPLACE PROCEDURE DoItInParts
IS
BEGIN
  DoItPartOne;
  -- DoItPartTwo;
  -- etc...
END DoItInParts;

3. With stats on, call that first stored procedure that does everything, and then run TKPROF to analyse it.

ALTER SESSION SET SQL_TRACE = TRUE;
EXEC DoIt;
ALTER SESSION SET SQL_TRACE = FALSE;

TKPROF robert_ora_3028.trc robert_ora_3028.prf explain='sys/******** as sysdba'

More on gathering and analysing simple performance statistics:
http://thinkoracle.blogspot.com/2005/09/analyzing-query-performance.html

4. With stats on, call that second stored procedure.

ALTER SESSION SET SQL_TRACE = TRUE;
EXEC DoItInParts;
ALTER SESSION SET SQL_TRACE = FALSE;
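Another way to magnify any difference enough to measure it is to call each version many times in a tight loop and compare timings. Here is a sketch using DBMS_UTILITY.GET_TIME (which returns hundredths of a second); it assumes the DoIt and DoItInParts procedures from above, and SERVEROUTPUT turned on:

```sql
DECLARE
  t_start PLS_INTEGER;
BEGIN
  -- Time 100,000 calls to the single big procedure
  t_start := DBMS_UTILITY.GET_TIME;
  FOR i IN 1 .. 100000 LOOP
    DoIt;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('DoIt:        ' || (DBMS_UTILITY.GET_TIME - t_start) || ' cs');

  -- Time 100,000 calls to the version split into sub-procedures
  t_start := DBMS_UTILITY.GET_TIME;
  FOR i IN 1 .. 100000 LOOP
    DoItInParts;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('DoItInParts: ' || (DBMS_UTILITY.GET_TIME - t_start) || ' cs');
END;
/
```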

You may find, as I did, that it is very hard to set up a test that reveals any noticeable performance overhead. If the procedures were spread out over the disk and not in the cache, or if there were lots and lots of parameters, I bet we could see some overhead. But if your procedure is called often enough for its overhead to matter to you, it would probably be in the cache at any given time anyway.

But don't spend too much time in conjecture, and even when I do produce some facts, set up your tests anyway.

Thursday, September 22, 2005

Column Name as a Variable

Consider the situation where you are writing a stored procedure that takes a column name as a variable, and then does some work based on a query that uses that column name. How would you do it?

Let's consider a hypothetical situation. Say you have a table with all your employees. Some of the columns are responsible for their pay. Employees can get paid in different ways, for example: base salary, hourly wage, bonus, dividend, etc. You have made each one of these a separate column in the table. (Note: all these columns are of the same type).

You have a number of stored procedures that access these tables. They all share some things in common, so you have decided to make some common "helper" procedures for all the "master" procedures to use.

Your "helper" procedure would have to take the column name from the "master" procedure, and then perform the common queries and data manipulations for that given column.

1. Dynamic SQL

One way of doing that is with dynamic SQL. Observe:

CREATE OR REPLACE PROCEDURE HelperProc (
  in_col_name IN VARCHAR2,
  out_val OUT NUMBER)
IS
BEGIN
  EXECUTE IMMEDIATE 'SELECT MAX(' || in_col_name || ') FROM EMP' INTO out_val;
END;

SET SERVEROUTPUT ON;

DECLARE
  out_val NUMBER;
BEGIN
  HelperProc('EMPNO', out_val);
  DBMS_OUTPUT.PUT_LINE(out_val);
END;

It works very well:
- It's a single line, no matter how many possible columns are used
- You don't need to know the column names in advance
- You don't need to change it after a DDL change

However, there are drawbacks to Dynamic SQL. Among others, there is extra parsing and (most seriously) vulnerabilities to SQL injection. I won't go into more detail on Dynamic SQL, but I promise to blog on it soon.
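One way to blunt the injection risk (my own sketch, not part of the original procedure) is to validate the incoming name against the data dictionary before concatenating it into the statement:

```sql
CREATE OR REPLACE PROCEDURE SafeHelperProc (
  in_col_name IN VARCHAR2,
  out_val OUT NUMBER)
IS
  v_col user_tab_columns.column_name%TYPE;
BEGIN
  -- Raises NO_DATA_FOUND unless the name really is a column of EMP,
  -- so arbitrary SQL fragments never reach EXECUTE IMMEDIATE
  SELECT column_name
    INTO v_col
    FROM user_tab_columns
   WHERE table_name = 'EMP'
     AND column_name = UPPER(in_col_name);

  EXECUTE IMMEDIATE 'SELECT MAX(' || v_col || ') FROM EMP' INTO out_val;
END SafeHelperProc;
```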

2. Static SQL

The obvious recourse is to use something like IF or CASE or (my favourite) DECODE.

CREATE OR REPLACE PROCEDURE HelperProc (
  in_col_name IN VARCHAR2,
  out_val OUT NUMBER)
IS
BEGIN
  SELECT MAX(DECODE(in_col_name, 'EMPNO', EMPNO, 'MGR', MGR, 'SAL', SAL,
                    'COMM', COMM, 'DEPTNO', DEPTNO, NULL))
    INTO out_val
    FROM EMP;
END;

Essentially this is like looking at the column name and doing something different depending on what it is. That's practically all you can do with static SQL, by definition. This almost defeats the purpose of having a common "helper" procedure, but there are still two reasons it would make sense:
1. Modularity (and abstraction) is generally a good thing
2. Any extra work done on out_val will justify the "helper" procedure.

3. Revised Data Model

There is an even more important consideration. Whenever you are struggling to do something clever, take a step back and consider your data model. It could be ill-suited for your needs.

In this case, what could we do?

We could break this information into separate tables, for example: EmpBaseSalary, EmpHourly, EmpBonus, etc. Then we could join them to the Emp table by employee id. Of course, that just makes the table name variable instead of the column name, which doesn't really help us. So instead:

We could elongate the employee table, making something like this:

ID  ...  SALARY  HOURLY  BONUS  DIVIDEND
1   ...  60      NULL    NULL   NULL
2   ...  100     NULL    NULL   20

into a separate table mapped by ID:

ID  VALUE  TYPE
1   60     'SALARY'
2   100    'SALARY'
2   20     'DIVIDEND'

That would effectively move the "column name" into the WHERE clause. That would certainly make the task easier. That is sort of a "reverse pivot."
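As a sketch (the table and column names here are mine, not from the discussion), the reworked model and the helper's query might look like this:

```sql
CREATE TABLE EmpPay (
  emp_id    NUMBER,
  pay_type  VARCHAR2(16),   -- 'SALARY', 'HOURLY', 'BONUS', 'DIVIDEND'
  pay_value NUMBER);

-- The "column name" is now just data in the WHERE clause:
SELECT MAX(pay_value)
  FROM EmpPay
 WHERE pay_type = 'SALARY';
```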

Also, that opens the door to add extra columns for effective start and end dates. We could even do this with views if we wanted to leave the data model alone.

http://thinkoracle.blogspot.com/2005/07/use-views.html
http://thinkoracle.blogspot.com/2005/09/pivot-and-crosstab-queries.html

That is just one example, but it shows how you need to take a step back and consider the real-world application.

Here is a link to the Dizwell discussion forum where we discussed this, and where most of this came from:
http://www.phpbbserver.com/phpbb/viewtopic.php?t=450&mforum=dizwellforum

Tuesday, September 20, 2005

Wanted: Your Unwanted Oracle/DB Books

Please excuse the spammy post today, but I would like to ask anyone reading this who may have unwanted Oracle (8 and up) or general DB (college) books to contact me (my email address is in my profile).

Rather than have them gather dust on your shelf, I can give them a good home. I will pay for shipping costs to Canada, and you will have my heartfelt appreciation.

Many thanks!

Monday, September 19, 2005

PL/SQL Code Storage: Files vs In-DB Packages

I read this interesting exchange on Steven Feuerstein's Q&A:

http://htmldb.oracle.com/pls/otn/f?p=2853:4:1727923121986559057::NO::P4_QA_ID:246

Essentially the question is where to store your PL/SQL stored procedures. H. Sheehan, Gary Myers, William Robertson, Pete Scott, Scott Swank, A. Nadrian and I discussed this on the Dizwell Forum.

http://www.phpbbserver.com/phpbb/viewtopic.php?t=458&mforum=dizwellforum

To sum up their positions:

Option 1: In organized, packaged files on your DB server
- Fewer security holes
- Handles failover situations better
- Easier to use version-control system
- Available for a greater number of nice PL/SQL editors
- Harder to inadvertently overwrite source code, which leads to greater confidence

Option 2: In-the-db packages
- Greater efficiency (pre-loaded)
- Greater code integrity (shows invalidation)
- Search code in USER_SOURCE table
- Easier to use with some PL/SQL tools
(Note: there are IDEs that integrate with source control and compile directly into the database)

It seems like the leading camp is Option 2. The advantages of having your packages pre-loaded into the database are just so significant, especially since you should be able to find an IDE that integrates directly with source control and can compile directly into the database. Scott Swank provided this suggestion:

http://www.oracle.com/technology/products/jdev/101/howtos/extools/subversion.html


One point of general consensus, however, is this:
- Code should be contained in packages
- These packages should be wrapped.
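For reference, wrapping is done with the wrap command-line utility that ships with the database; a minimal sketch (the file names are illustrative):

```
wrap iname=my_package.sql oname=my_package.plb
```

The resulting .plb file contains the obfuscated source, and is compiled in SQL*Plus just like the original (e.g. @my_package.plb).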

Wednesday, September 14, 2005

Analyzing Query Performance

Alternate title: Keeping Tables Small, Revisited

In an earlier article I spoke about how removing old data can help speed up table scans:

http://thinkoracle.blogspot.com/2005/08/keeping-tables-small.html

During a test in that article, I seemed to detect that querying a view composed of the 90/10 split of a large table was much faster than querying that table directly.

I was only trying to demonstrate that it wouldn't be much slower; I did not expect it to be faster. I didn't pursue it at the time, but I reproduced those results in two separate tests later on.

Incidentally, David Aldridge, who inspired my original article, has a theory on this:
http://oraclesponge.blogspot.com/2005/08/more-on-partition-not-quite-pruning.html

So the greater question was:
"How do you determine why a query is faster (or slower) than you expected?"

The first step is to use SQL Trace and TKProf:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/sqltrace.htm#1018


Note: there are MANY sources of information on this. Apart from the Oracle documentation, I also used articles by Tom Kyte, as well as his book "Expert One-on-One Oracle."

Here was my test.

1. Set some variables:

ALTER SESSION SET TIMED_STATISTICS=true;
ALTER SYSTEM SET MAX_DUMP_FILE_SIZE=1000;
ALTER SYSTEM SET USER_DUMP_DEST="C:/temp/trace";

2. Create the ReallyBigTable

CREATE TABLE ReallyBigTable AS SELECT * FROM ALL_OBJECTS;

3. Turn on tracing

ALTER SESSION SET SQL_TRACE = TRUE;

4. Run the query

SELECT SUM(object_id) FROM ReallyBigTable
WHERE object_id * 2 NOT IN
(SELECT object_id FROM ReallyBigTable);

Elapsed: 00:44:50.07

5. Turn off tracing

ALTER SESSION SET SQL_TRACE = FALSE;

6. Run TKPROF (separate window)

TKPROF robert_ora_3236.trc robert_ora_3236.prf explain='sys/******** as sysdba'
- Save that file somewhere (it will be overwritten later)
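To figure out which .trc file in USER_DUMP_DEST belongs to your session, one common trick (sketched here; column names can vary slightly by version) is to look up your server process ID, which appears in the file name:

```sql
-- In a name like robert_ora_3236.trc, 3236 is the server
-- process's operating-system process id (SPID)
SELECT p.spid
  FROM v$process p, v$session s
 WHERE p.addr = s.paddr
   AND s.audsid = USERENV('SESSIONID');
```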

7. Create Archive and Active tables.

CREATE TABLE ReallyBigTable_Archive AS SELECT * FROM ReallyBigTable
WHERE object_id < 40000;

CREATE TABLE ReallyBigTable_Active AS SELECT * FROM ReallyBigTable
WHERE object_id >= 40000;

8. Drop ReallyBigTable

DROP TABLE ReallyBigTable;

9. Create the view

CREATE VIEW ReallyBigTable AS
SELECT * FROM ReallyBigTable_Archive
UNION ALL
SELECT * FROM ReallyBigTable_Active;

10. Turn on Tracing

ALTER SESSION SET SQL_TRACE = TRUE;

11. Run the query again

SELECT SUM(object_id) FROM ReallyBigTable
WHERE object_id * 2 NOT IN
(SELECT object_id FROM ReallyBigTable);

Elapsed: 00:45:21.04

12. Turn off tracing

ALTER SESSION SET SQL_TRACE = FALSE;

13. Run TKPROF (separate window)

TKPROF robert_ora_3236.trc robert_ora_3236.prf explain='sys/******** as sysdba'

Conclusion:

I repeated the test 3 times with tracing on, and each time I could not reproduce the results. I saw virtually no difference in elapsed time between querying the big table directly and querying the same data through the UNION ALL view.

So I guess we're left in the dark as to why querying the view was so much faster during my earlier tests. Perhaps we can apply Occam's Razor: the safest conclusion is simply that I goofed.

Either way, it made for an interesting walkthrough of how to generate performance data and query plans. I will leave you with an excerpt from the TKPROF output:

SELECT SUM(object_id) FROM ReallyBigTable
WHERE object_id * 2 NOT IN
(SELECT object_id FROM ReallyBigTable)

call     count       cpu    elapsed       disk      query
------- ------  -------- ---------- ---------- ----------
Parse        1      0.01       0.00          0          0
Execute      1      0.00       0.00          0          0
Fetch        2    612.03    2690.60   16168915   17030020
------- ------  -------- ---------- ---------- ----------
total        4    612.04    2690.60   16168915   17030020

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS

   Rows  Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE
  20495   FILTER
  40764    TABLE ACCESS FULL REALLYBIGTABLE
  20269    TABLE ACCESS FULL REALLYBIGTABLE


   Rows  Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT GOAL: CHOOSE
      1   SORT (AGGREGATE)
  20495    FILTER
  40764     TABLE ACCESS (FULL) OF 'REALLYBIGTABLE'
  20269     TABLE ACCESS (FULL) OF 'REALLYBIGTABLE'

Monday, September 12, 2005

20 Oracle Lessons

I started using Oracle with version 8 in 1999. After a few years I changed companies to a Sybase/SQL-Server shop. But the past year has found me back working with Oracle, this time versions 8 and 9.

It has been an interesting time getting myself back into "game shape" with Oracle, and digging into version 9. If you've been reading this blog, you've been able to follow along with me in my adventures.

I decided this was as good a time as any to pause and reflect on some of the lessons I've learned in this past year.

Oracle:
1. Oracle is very complex.
I always thought "a database is a database" but Oracle is about 4 times as complex as Sybase/MS-SQL.

2. Fortunately Oracle is well-documented.
I have fallen in love with Oracle documentation. Clear, well-written, comprehensive and lots of examples.

http://thinkoracle.blogspot.com/2005/07/oracle-docs.html

3. There are many on-line Oracle sites and forums to find help.
The primary benefit of a popular product is that whatever your mistake is, you're probably not the first person to experience it. There is a huge on-line Oracle community, and so many places to search for help. My favourite links and blogs are kept current on this site.

Testing:
4. It's quick, free and very easy to set up a personal Oracle database on your Windows PC for testing purposes.

5. Build proper test cases, and test everything you read before you implement it.
This is part of my personal style that I apply to all my work, regardless of technology. But I feel it is especially true of a database as complex as Oracle. Especially with all the different versions out there.

6. Burleson Consulting makes a lot of mistakes.
So test those even more carefully. But remember, even the stellar Tom Kyte makes mistakes.

http://thinkoracle.blogspot.com/2005/06/expert-one-on-one.html

Coding/Modelling Practises:
7. Document your code.
It makes it easier to reuse good code, and to fix bad code. Again, this is my personal style that I apply to all projects, regardless of technology: do the best I can, and explain what I'm doing.

8. Specify column/parameter names when writing queries or making stored procedure calls.

http://thinkoracle.blogspot.com/2005/07/specifying-insert-columns.html

9. NULLs are very special
This is one of my favourite topics. Casual database programmers might not be aware of all the special cases related to NULLs, making them a common cause of honest (but costly) mistakes.

http://thinkoracle.blogspot.com/2005/05/null-vs-nothing.html
http://thinkoracle.blogspot.com/2005/06/nulls-in-oracle.html
http://thinkoracle.blogspot.com/2005/09/nulls-in-count.html

10. Data integrity is best accomplished in the database layer (as opposed to procedure or application layers).
Why use a sophisticated database like Oracle if you're just going to use it as a data store? Use Oracle's ability to protect your data's integrity, and then you can fear badly written applications a little bit less.

11. Triggers can be used as a form of "advanced constraint" for data integrity.

http://thinkoracle.blogspot.com/2005/07/use-constraints.html

12. Views are very useful in solving complex queries without affecting your data model (among many other uses!)

http://thinkoracle.blogspot.com/2005/07/use-views.html

13. A great way to improve performance is to archive/partition data.

http://thinkoracle.blogspot.com/2005/08/keeping-tables-small.html

14. You can often achieve common table types and procedure parameters by defining user types, using %TYPE and referencing foreign keys. Otherwise, use an advanced SQL modeller to develop your PL/SQL.

http://thinkoracle.blogspot.com/2005/06/common-table-column-types.html

15. Choose carefully between using natural and synthetic keys when designing your tables.

http://thinkoracle.blogspot.com/2005/06/natural-vs-synthetic-keys.html

Useful Oracle (other than REGEXP, in Version 10):

16. DECODE is incredibly useful (CASE WHEN can also be used).
I am a huge fan of DECODE, I can't imagine working without it. It is perhaps poorly named.

http://thinkoracle.blogspot.com/2005/06/decode.html

17. CONNECT BY is useful when you need hierarchical queries or string aggregation (stragg).

http://thinkoracle.blogspot.com/2005/06/connect-by.html

18. GROUP BY has a lot of relatives to help write queries for complex analytic functions: RANK, GROUPING SETS, GROUPING_ID, ROLLUP

http://thinkoracle.blogspot.com/2005/08/compute.html

19. PL/SQL supports OOP (Object-Oriented Programming)

http://thinkoracle.blogspot.com/2005/06/oop-in-plsql-yep.html

20. Oracle has a lot of handy toolkits
For example UTL_HTTP can be used to achieve very simple screen-scraping (among other things)

http://thinkoracle.blogspot.com/2005/08/utlhttp.html

One more thing that didn't make the list (because it would push the total from an even 20 to an odd 21):
When you're having trouble writing a very clever query or procedure, take a step back and look at your data model. It might be inappropriate for how you're using it.

Thanks for joining me on this stroll down memory lane!

Friday, September 09, 2005

NULLs in COUNT

Quick, easy one, but it trips up beginners. Something of which to be mindful. Observe:

SELECT COUNT (*) FROM EMP;

COUNT(*)
----------
14

SELECT COUNT (COMM) FROM EMP;

COUNT(COMM)
-----------
4


Note: 4 instead of 14. But if we do this:

SELECT COMM FROM EMP;

COMM
----------

300
500

1400




0





14 rows selected.


We get all 14.

This is expected behaviour. From the Oracle SQL Reference: "If you specify 'expr' then COUNT returns the number of rows where 'expr' is not null."

http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540.pdf

COUNT can take virtually any parameter, including 1, *, or a column. But be aware that if you're counting a particular column, NULLs won't be counted.
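A corollary: you can count the NULLs themselves by subtraction, or by filtering explicitly. A quick sketch against the same EMP table:

```sql
-- 14 total rows minus 4 non-NULL COMMs = 10 NULLs
SELECT COUNT(*) - COUNT(comm) AS null_comms FROM emp;

-- Equivalent, counting the NULL rows directly
SELECT COUNT(*) AS null_comms FROM emp WHERE comm IS NULL;
```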

Additional reading material:

Everyone knows NULL is one of my favourite topics:
http://thinkoracle.blogspot.com/2005/05/null-vs-nothing.html
http://thinkoracle.blogspot.com/2005/06/nulls-in-oracle.html

Eddie Awad on the difference between COUNT(*) and COUNT(1)
http://awads.net/wp/2005/07/06/count-vs-count1/
(Note: see the links in the comments for more information)

Thursday, September 01, 2005

Pivot and Crosstab Queries

Here is another advanced concept that will come in useful when solving Oracle problems.

Imagine you're trying to create a result set where the rows need to be columns, or vice versa; in essence, you need to "pivot" them. That is a very common requirement, and this is where you need a pivot (or crosstab) query to get the job done.

As always, when you want to understand something, you can start by Asking Tom:
http://asktom.oracle.com/pls/ask/f?p=4950:8:16663421538065257584::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:766825833740


A simple pivot query is accomplished by basically doing the following:
1. Add some kind of count or row number to your query, if necessary for the grouping
2. Then use your (revised) original query as a sub-query
3. Use "decode" to turn rows into columns (ie. a "sparse" matrix).
4. Use "max" to "squash" the multiple rows you moved to columns, into single rows. Don't forget to group by.
(Note: it gets more complicated if you don't know how many columns you'll need).

Here is another of Ask Tom's examples. It clearly shows how to use decode to create a "sparse" matrix, and then use max to "squash" it down.
http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:124812348063

Let's look at a simple example in slow motion.

Here's the data:
CREATE TABLE CFL (season NUMBER(4), team VARCHAR2(16), points NUMBER(3));
INSERT INTO CFL (season, team, points) VALUES (2004, 'Argonauts', 21);
INSERT INTO CFL (season, team, points) VALUES (2004, 'Alouettes', 28);
INSERT INTO CFL (season, team, points) VALUES (2004, 'Tiger-Cats', 19);
INSERT INTO CFL (season, team, points) VALUES (2004, 'Renegades', 10);
INSERT INTO CFL (season, team, points) VALUES (2003, 'Argonauts', 18);
INSERT INTO CFL (season, team, points) VALUES (2003, 'Alouettes', 26);
INSERT INTO CFL (season, team, points) VALUES (2003, 'Tiger-Cats', 2);
INSERT INTO CFL (season, team, points) VALUES (2003, 'Renegades', 14);
INSERT INTO CFL (season, team, points) VALUES (2002, 'Argonauts', 16);
INSERT INTO CFL (season, team, points) VALUES (2002, 'Alouettes', 27);
INSERT INTO CFL (season, team, points) VALUES (2002, 'Tiger-Cats', 15);
INSERT INTO CFL (season, team, points) VALUES (2002, 'Renegades', 10);

What we want:
A table showing each of these 4 teams and their point totals for these 3 seasons.

So what is our pivot row/column? Season.

Step 1/2: We are using season, so we don't need to create our own grouping field, like count, rownum, or running total (sum) for example. That would be easy enough to do, but let's keep this simple.

Step 3: Use "decode" to turn the season row into a column. Take a look at our "sparse" matrix.

SELECT team,
DECODE (season, 2002, points, NULL) Yr2002,
DECODE (season, 2003, points, NULL) Yr2003,
DECODE (season, 2004, points, NULL) Yr2004
FROM (SELECT season, team, points FROM CFL);

TEAM                 YR2002     YR2003     YR2004
---------------- ---------- ---------- ----------
Argonauts                                      21
Alouettes                                      28
Tiger-Cats                                     19
Renegades                                      10
Argonauts                           18
Alouettes                           26
Tiger-Cats                           2
Renegades                           14
Argonauts                16
Alouettes                27
Tiger-Cats               15
Renegades                10


Step 4: Now let's use max to "squash" this into single rows. Don't forget GROUP BY.

SELECT team,
MAX (DECODE (season, 2002, points, NULL)) Yr2002,
MAX (DECODE (season, 2003, points, NULL)) Yr2003,
MAX (DECODE (season, 2004, points, NULL)) Yr2004
FROM (SELECT season, team, points FROM CFL)
GROUP BY team;

TEAM                 YR2002     YR2003     YR2004
---------------- ---------- ---------- ----------
Alouettes                27         26         28
Argonauts                16         18         21
Renegades                10         14         10
Tiger-Cats               15          2         19


Pretty cool, eh? Easy, too.

Notice that the key to this is DECODE. If DECODE is not already part of your toolbelt, I recommend studying up.

http://thinkoracle.blogspot.com/2005/06/decode.html

Ready for a tougher example? Let's look at another Ask Tom:
http://asktom.oracle.com/pls/ask/f?p=4950:8:16663421538065257584::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:7086279412131


For further study of pivot queries and analytic functions in general, there is an awesome write-up in Chapter 12 of Tom Kyte's "Expert One-on-One Oracle." You'd think I'd get a kickback from Tom Kyte with all the promotion I'm doing, but the honest truth is that no one explains it as well as he does.

So, do you understand pivot queries now? No problems?

If so, now you're ready for one of Ask Tom's more complex examples:
http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:6923393629227

Pivot Tables

One final word: don't confuse pivot queries with pivot tables. Pivot tables are a different concept, and have different uses (most typically to fill in missing data). Until I blog about pivot tables, check out these two links:

Jonathan Gennick
http://www.oracle.com/technology/oramag/oracle/02-sep/o52sql.html

Laurent Schneider
http://laurentschneider.blogspot.com/2005/08/pivot-table.html

Oracle WTF

Another great way to learn Oracle is to study mistakes.

Generally people have (misguided) reasons behind their mistakes, and studying them can improve your understanding of Oracle. As such, I'd like you to check out a new blog dedicated to the most spectacular misunderstandings of Oracle:

http://oracle-wtf.blogspot.com/

It's written by William Robertson, James Padfield, Thai Rices and Adrian Billington, and was inspired by http://thedailywtf.com.

Another advantage of this blog, as compared to the 20-plus (and growing) other Oracle blogs out there, is that you are more likely to get a laugh out of it. Or at least a good cry.

For those of you who enjoy the Dizwell Forum, I have gotten into the habit of posting the occasional WTF there myself:

http://www.phpbbserver.com/phpbb/viewtopic.php?t=310&mforum=dizwellforum
http://www.phpbbserver.com/phpbb/viewtopic.php?t=383&mforum=dizwellforum
http://www.phpbbserver.com/phpbb/viewtopic.php?t=417&mforum=dizwellforum

By the way, "WTF" stands for "What the F." As in "What the F was the developer thinking?" :)
