Discussion: nio connector configuration
Peter Warren
2009-02-12 03:24:05 UTC
Looking for nio configuration tips for 6.0.18...

I have an ajax app that uses a single socket connection for sending
standard http requests and receiving responses, and another socket
connection to listen via comet for messages pushed out by the server.
A comet timeout is generated every 50 seconds, and the server then
closes the client connection. The client immediately reconnects.
This heartbeat lets me know the client is still alive.
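
For reference, a minimal sketch of the client-side heartbeat loop (the
real client is an ajax app in a browser; the endpoint path and the
single-byte body here just mirror the test described later in this
thread):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class HeartbeatClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost/test/cometTest");
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);            // POST with a tiny body
            conn.getOutputStream().write('5');
            InputStream in = conn.getInputStream();
            while (in.read() != -1) {
                // block until the server's comet timeout closes the connection
            }
            in.close();
            // server closed the connection: heartbeat observed, reconnect
        }
    }
}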

In load testing with JMeter, I find that when running somewhere
between 100 and 200 threads (half making regular http requests and
half making comet listen requests that connect and then wait until the
server closes the connection), the comet timeout events get generated
more and more slowly. Although they should fire every 50 seconds,
only a little over a minute into the load test the timeouts stretch to
sometimes over 60 seconds. When running 100 threads, the system works
fine, with the timeouts occurring within 60 seconds indefinitely.

I'm not much of a tomcat administrator and am trying to figure out how
best to tune it for my app. This is what my connector config looks
like now. Basically I'm throwing the kitchen sink at it, and planning
to fine-tune once I find something that makes a difference.

<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="1000"
acceptCount="18192"
acceptorThreadCount="2"
acceptorThreadPriority="10"
pollerThreadCount="2"
pollerThreadPriority="10"
maxKeepAliveRequests="-1"
command-line-options="-Dorg.apache.tomcat.util.net.NioSelectorShared=false"
selectorPool.maxSelectors="500"
redirectPort="8443"
enableLookups="false" />

A couple of notes:
The acceptorThreadPriority and pollerThreadPriority are set using ints
because I get the following warnings in the catalina log when trying
to use the documented notation:
WARNING: [SetAllPropertiesRule]{Server/Service/Connector} Setting
property 'pollerThreadPriority' to 'java.lang.Thread#MAX_PRIORITY' did
not find a matching property.

I also get the warning when trying to use keepAliveTimeout. Is this
property available for the nio connector?
WARNING: [SetAllPropertiesRule]{Server/Service/Connector} Setting
property 'keepAliveTimeout' to '120000' did not find a matching
property.

I also get the warning when trying to use command-line-options, or am
I really supposed to be setting this property on the command line?
WARNING: [SetAllPropertiesRule]{Server/Service/Connector} Setting
property 'command-line-options' to
'-Dorg.apache.tomcat.util.net.NioSelectorShared=false' did not find a
matching property.

Thanks for any tips,
Peter
Caldarale, Charles R
2009-02-12 04:23:16 UTC
2009-02-12 04:23:16 UTC
I can't answer your real questions, but here's a bit for your minor ones.
the acceptorThreadPriority and pollerThreadPriority are
set using ints because I get the following warnings in the
catalina log when trying to use the documented notation
The documentation isn't meant to imply that you can use the symbolic name of a Java constant in the .xml; the integer values are what's required. The reference to java.lang.Thread is there just to tell you where to look for the legal values.
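
For instance, the java.lang.Thread constants are MIN_PRIORITY = 1,
NORM_PRIORITY = 5, and MAX_PRIORITY = 10, so maximum priority is written as:

acceptorThreadPriority="10"
pollerThreadPriority="10"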
I also get the warning when trying to use keepAliveTimeout.
Is this property available for the nio connector?
No; it's only listed under the older connector (the one labeled "Standard Implementation" that then somewhat ambiguously refers to HTTP).
'-Dorg.apache.tomcat.util.net.NioSelectorShared=false' did not find a
matching property.
Works for me (at least I don't get any error message) on 6.0.18 running with JDK 6u12 on a Vista 64 box; how are you setting the above property, and what are you running on?

- Chuck


Peter Warren
2009-02-12 05:08:07 UTC
Thanks for the tips. Very helpful.
Post by Caldarale, Charles R
Post by Peter Warren
I also get the warning when trying to use keepAliveTimeout.
Is this property available for the nio connector?
No; it's only listed under the older connector (the one labeled "Standard Implementation" that then somewhat ambiguously refers to HTTP).
I suspected as much, but got confused when I used the acceptCount
property and tomcat didn't complain. acceptCount is only listed under
the "standard implementation" as well, so I expected a warning.
Post by Caldarale, Charles R
Post by Peter Warren
'-Dorg.apache.tomcat.util.net.NioSelectorShared=false' did not find a
matching property.
Works for me (at least I don't get any error message) on 6.0.18 running with JDK 6u12 on a Vista 64 box; how are you setting the above property, and what are you running on?
I was setting it as a property in the connector config. Maybe that
was silly of me, but I thought maybe all the properties were localized
in the connector. I just tried it as a command-line option and it
seemed to work.
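
For anyone following along, here is how that can look as a
command-line option (a sketch assuming a Unix install; catalina.sh
reads CATALINA_OPTS, e.g. from bin/setenv.sh):

# bin/setenv.sh (create it if absent; read by catalina.sh at startup)
CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.tomcat.util.net.NioSelectorShared=false"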

Interesting side note: with
"-Dorg.apache.tomcat.util.net.NioSelectorShared=false" set and
selectorPool.maxSelectors="500", my server code starts generating
comet END events after about 30 threads start, only about 10
seconds into the test.

Peter
Peter Warren
2009-02-13 09:16:46 UTC
I'm trying to figure out how best to configure nio so that my comet
timeout events get generated in a timely manner. I have the comet
events set to generate a timeout every 50 seconds. Works fine with
few users. Under a moderate but reasonable load the timeout gets
generated on average every 113 seconds. My configuration tweaks
haven't yielded any noticeable changes (see below).

Test results...

Background:
- using JMeter
- 300 threads executing normal http requests, averaging ~9.8 requests/second.
- 300 threads executing comet requests that simply wait for the server
to close the connection every 50 seconds, averaging ~2.6
requests/second.
- server is ubuntu 8.10 running tomcat 6.0.18.
- server is not cpu constrained, averaging about 8-12% cpu
- server doesn't seem to be memory constrained. top shows 80% memory
use after hours of testing (machine has 512MB physical memory and
tomcat has a max heap set to 384MB)
- network latency isn't a problem

I ran 2 tests with different configurations for the nio connector: 1
test with bare-bones settings, and 1 test with everything that seemed
like it might make a difference.

<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
redirectPort="8443"
enableLookups="false" />

Ran for 3+ hours.
8-12% cpu.
12.4 requests/second.
comet requests: average response time 112 secs, min 21 secs, max 179 secs

<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="1000"
minSpareThreads="200"
acceptorThreadCount="20"
acceptorThreadPriority="10"
pollerThreadCount="20"
pollerThreadPriority="10"
redirectPort="8443"
enableLookups="false" />

Ran for 1 1/2 hours.
8-12% cpu.
12.2 requests/second.
comet requests: average response time 113 secs, min 50 secs, max 133 secs

So how can I get my comet timeouts generated at close to 50 secs under load?

I thought maybe the poller thread priority was too low (does the
poller thread generate the timeouts?), but setting its priority to max
didn't change anything.

Just to make sure I wasn't doing anything dumb in my client code, I
replaced my event() method with the one below and still got the same
disparity in comet timeouts, ranging from 50 to 120 secs:

public void event(CometEvent event) throws IOException, ServletException {
    HttpServletRequest request = event.getHttpServletRequest();
    if (event.getEventType() == CometEvent.EventType.BEGIN) {
        event.setTimeout(50000);
    } else if (event.getEventType() == CometEvent.EventType.ERROR) {
        event.close();
    } else if (event.getEventType() == CometEvent.EventType.END) {
        event.close();
    } else if (event.getEventType() == CometEvent.EventType.READ) {
        InputStream is = request.getInputStream();
        byte[] buf = new byte[512];
        do {
            is.read(buf); // can throw an IOException
        } while (is.available() > 0);
    }
}

I just checked the priority of the thread issuing the comet timeout
event and its priority is 5. I have both the acceptor and poller
thread priorities set to 10. How can I bump up the priority of the
thread that issues the timeout events (in this case named
"http-80-exec-1")?

Thanks for any ideas,
Peter
Peter Warren
2009-02-19 20:28:28 UTC
Sorry to bump this thread. I'm willing to pay for some assistance if
anyone's interested in helping. I'm trying to figure out 2 problems
when running my system under a light-to-moderate load test:

1) why do my comet timeout events not get generated on time (supposed
to be every 50 seconds, averaging 56s, with many refused connections
skewing the average down, max 85s)?

2) why is tomcat refusing connections under what seems to be a reasonable load?

I'm happy to do more legwork on my own -- just looking for some
pointers here. Can anyone help me out?

Looking at JConsole, 2 items look suspicious: 1) blocked
http-acceptor threads, and 2) block-poller and client-poller threads
that show a high number of blocks (see below). Also, the non-comet
http requests are returning quickly, averaging ~150ms, so it seems to
be only my comet requests that are having issues.

Running a load test with 600 total client threads averaging 14
requests/sec. 300 threads making normal http requests, 300 threads
making comet requests that wait 50 seconds for a server timeout.

Tomcat 6.0.18 on Windows XP.

<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="1000"
acceptorThreadCount="5"
acceptorThreadPriority="10"
pollerThreadCount="5"
pollerThreadPriority="10"
redirectPort="8443"
enableLookups="false" />

After half an hour of running:

normal http requests:
20000 samples, averaging 150 ms, ~.5% error

comet requests:
10000 samples, averaging 56s, ~3.5% error

Information from JConsole:

CPU avg: 5-10%
threads: stable @ ~300
memory: stable @ ~130MB

Thread status from JConsole of the http-acceptor, block-poller, and
client-poller threads:

Name: http-80-Acceptor-0
State: BLOCKED on ***@17a82f1 owned by: http-80-Acceptor-4
Total blocked: 132 Total waited: 0

Stack trace:
sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1198)
java.lang.Thread.run(Unknown Source)

-----

Name: http-80-Acceptor-1
State: BLOCKED on ***@17a82f1 owned by: http-80-Acceptor-4
Total blocked: 129 Total waited: 0

Stack trace:
sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1198)
java.lang.Thread.run(Unknown Source)

-----

Name: http-80-Acceptor-2
State: BLOCKED on ***@17a82f1 owned by: http-80-Acceptor-4
Total blocked: 122 Total waited: 0

Stack trace:
sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1198)
java.lang.Thread.run(Unknown Source)

-----

Name: http-80-Acceptor-3
State: BLOCKED on ***@17a82f1 owned by: http-80-Acceptor-0
Total blocked: 166 Total waited: 0

Stack trace:
sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1198)
java.lang.Thread.run(Unknown Source)

-----

Name: http-80-Acceptor-4
State: BLOCKED on ***@17a82f1 owned by: http-80-Acceptor-0
Total blocked: 133 Total waited: 0

Stack trace:
sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1198)
java.lang.Thread.run(Unknown Source)

-----

Name: http-80-ClientPoller
State: RUNNABLE
Total blocked: 17,950 Total waited: 5

Stack trace:
sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(Native Method)
sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(Unknown Source)
sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(Unknown Source)
sun.nio.ch.WindowsSelectorImpl.doSelect(Unknown Source)
sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
- locked sun.nio.ch.Util$***@f7b8fc
- locked java.util.Collections$***@195afdb
- locked ***@c56236
sun.nio.ch.SelectorImpl.select(Unknown Source)
org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1473)
java.lang.Thread.run(Unknown Source)

-----

Name: NioBlockingSelector.BlockPoller-1
State: RUNNABLE
Total blocked: 10,645 Total waited: 0

Stack trace:
sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(Native Method)
sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(Unknown Source)
sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(Unknown Source)
sun.nio.ch.WindowsSelectorImpl.doSelect(Unknown Source)
sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
- locked sun.nio.ch.Util$***@b2ee9a
- locked java.util.Collections$***@14eda77
- locked ***@1e8bb4c
sun.nio.ch.SelectorImpl.select(Unknown Source)
org.apache.tomcat.util.net.NioBlockingSelector$BlockPoller.run(NioBlockingSelector.java:305)

Thanks for any help,
Peter
Filip Hanik - Dev Lists
2009-02-20 01:02:38 UTC
Peter, if you post your test code packaged in such a way that whoever
helps you doesn't have to reverse engineer your app to set up the test
case and test it locally, then you most likely won't have to pay anyone
to help you.
The more effort you put into providing information to the list,
including a reproducible test case, the more likely it is that someone,
possibly me, will help you out for free.
However, if it would take someone more than 10 minutes and they still
can't set up your example, then no one will try it for free.

Even if you paid someone, they would ask for the same info; however,
they would charge you for the time it took to set up and reproduce your
example.
And that is the difference between free support and paid support:
with free support, you do the work and someone helps you;
with paid support, you pay someone to do the work that you could have
done. The help and the answer in the end will most likely be the same.

If you want paid support or consulting, you can get that from many
different companies, including the one I work for, www.springsource.com

Filip
otismo
2009-02-27 19:11:41 UTC
Thanks for the response, Filip. Hopefully this is more helpful...

I put a war at http://www.nomad.org/test.war containing my web app, the
source, and my jmeter test plan.

My question: why are comet timeouts getting generated substantially behind
the timeout setting?

Is it because I have incorrectly configured tomcat? Is there something
wrong with my test? Can anyone else confirm this behavior?

It seems as though the normal (non-comet) http requests are taking priority
over the comet requests.

The test is very simple. One set of threads sends non-comet http requests
every 10 seconds. Another set of threads sends comet requests with a single
byte in the body. The comet servlet sets a comet timeout of 10 seconds,
reads the request body, and then closes the connection on receiving the
comet timeout event. On close of the connection, the comet test threads
then send another comet request.

I have JMeter set to start 100 threads for the http thread group and 100
threads for the comet thread group, ramping up at 1 thread per second.

A fifteen-minute test shows:
http requests: 9750 samples
http response time: avg 109ms, min 66ms, max 3699ms
http errors: 0%
http throughput: 9.5 requests/second
comet requests: 1942 samples
comet response time: avg 50149ms, min 10353ms, max 120876ms (tcp timeout is
set to 120000)
comet errors: 0%
comet throughput: 1.9 requests/second
cpu use is minimal (1-4%)

There are no errors in the catalina log.

I also added timing code to the test servlet to confirm that JMeter's
measurements are accurate and found they are. After confirming the
measurements, I removed the timing code.

I noticed that even on short tests the http requests predominate, even
though there should be roughly the same number of http requests as
comet requests (1 request/10 seconds/thread, and there are 100 threads
for each thread group, http and comet).

My NIO configuration is:
<Connector port="80"
protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="1000"
acceptorThreadCount="2"
acceptorThreadPriority="10"
pollerThreadCount="2"
pollerThreadPriority="10"
redirectPort="8443"
enableLookups="false" />

os: ubuntu 8.10 (although the same behavior was also observed on Windows XP SP 3)
tomcat 6.0.18

(Note: the following are also in the war bundle referenced at top)
The client TCP request looks like this (without the ####s):
####
POST /test/cometTest HTTP/1.1
Host: 173.45.237.215
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1b2)
Gecko/20081201 Firefox/3.1b2
Connection: keep-alive
Content-Type: text/plain
Content-Length: 1

5
####

The http test servlet's doGet method looks like this:
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws IOException {
    response.getWriter().println("response");
}

The comet test servlet's event method looks like this:
public void event(CometEvent event) throws IOException, ServletException {
    HttpServletRequest request = event.getHttpServletRequest();
    if (event.getEventType() == CometEvent.EventType.BEGIN) {
        event.setTimeout(10000);
    } else if (event.getEventType() == CometEvent.EventType.ERROR) {
        event.close();
    } else if (event.getEventType() == CometEvent.EventType.END) {
        event.close();
    } else if (event.getEventType() == CometEvent.EventType.READ) {
        InputStream is = request.getInputStream();
        byte[] buf = new byte[512];
        do {
            is.read(buf);
        } while (is.available() > 0);
    }
}
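
For completeness, here is a sketch of the class around that method
(the class name is hypothetical; in Tomcat 6 the Comet API lives in
org.apache.catalina):

import java.io.IOException;
import java.io.InputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;

import org.apache.catalina.CometEvent;
import org.apache.catalina.CometProcessor;

public class CometTestServlet extends HttpServlet implements CometProcessor {
    // Tomcat calls event() instead of service() because this servlet
    // implements CometProcessor; the event() body is shown above
    public void event(CometEvent event) throws IOException, ServletException {
        // ... as above ...
    }
}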

Thanks for any help,
Peter
Filip Hanik - Dev Lists
2009-03-06 02:06:17 UTC
hi Peter,
I ran your jmeter test and I get an average request time for Comet of
13.5 seconds.
I'm running this on what will be 6.0.19, meaning 6.0.x/trunk.
With a 10-second timeout, you won't get timed out in exactly 10
seconds; timeouts are of the absolute lowest priority.
If there is request data coming in for the poller, then that will get
preference. Timeouts happen when the poller thread is free, and the time
has passed.
But 13.5 sounds pretty reasonable in this case.
Filip
otismo
2009-03-06 19:04:55 UTC
Thanks for checking it out, Filip.
Post by Filip Hanik - Dev Lists
I'm running this on what will be 6.0.19, meaning 6.0.x/trunk
Yes, running from the trunk yields very different numbers. Looking into it
more, 6.0.18 didn't honor the pollerThreadCount setting.

results (all tests were run for around 3000 samples):
6.0.18:
with 2 acceptors and 2 pollers set (really only 1 poller was used because
6.0.18 didn't honor the poller setting):
avg 20s, min 10s, max 60s

6.0.19:
2 acceptors, 2 pollers:
avg 15s, min 10s, max 32s

10 acceptors, 10 pollers:
avg 13s, min 10s, max 53s

50 acceptors, 50 pollers:
avg 11s, min 10s, max 27s

1 acceptor, 50 pollers:
avg 11s, min 10s, max 32s

So it seems that in my app, where timely timeouts are important, raising the
number of pollers helps.
Post by Filip Hanik - Dev Lists
Timeouts happen when the poller thread is free, and the time has passed.
Ok, so the results above make sense because having more poller threads will
increase the likelihood that one will be free and that my timeout will get
serviced more quickly.

What I don't understand is the connection between non-comet http requests
and comet requests. Running the same test above, without the non-comet http
requests (setting the # of threads in the HttpTest thread group to 0, and
upping the comet threads to 200) on 6.0.18 I get:
avg 10.3s, min 10.0s, max 13s

A non-comet request shouldn't be tying up a poller thread, should it? So
why would non-comet requests delay the delivery of comet timeouts?

Peter
Filip Hanik - Dev Lists
2009-03-06 20:49:00 UTC
Post by otismo
Thanks for checking it out, Filip.
Post by Filip Hanik - Dev Lists
I'm running this on what will be 6.0.19, meaning 6.0.x/trunk
Yes, running from the trunk yields very different #s. Looking into it more,
6.0.18 didn't honor the pollerThreadCount setting.
6.0.18, 2 acceptors and 2 pollers set (really only 1 poller used):
avg 20s, min 10s, max 60s
6.0.19, 2 acceptors, 2 pollers: avg 15s, min 10s, max 32s
6.0.19, 10 acceptors, 10 pollers: avg 13s, min 10s, max 53s
6.0.19, 50 acceptors, 50 pollers: avg 11s, min 10s, max 27s
6.0.19, 1 acceptor, 50 pollers: avg 11s, min 10s, max 32s
So it seems that in my app, where timely timeouts are important, raising the
number of pollers helps.
actually, instead of changing the poller count, try reducing the
selectorTimeout: set selectorTimeout="50" (i.e. 50ms);
the default is one second.
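
In connector terms, that suggestion would look something like this (a
sketch; the other attributes stay as before):

<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
           selectorTimeout="50"
           maxThreads="1000"
           redirectPort="8443"
           enableLookups="false" />
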
Post by otismo
Post by Filip Hanik - Dev Lists
Timeouts happen when the poller thread is free, and the time has passed.
Ok, so the results above make sense because having more poller threads will
increase the likelihood that one will be free and that my timeout will get
serviced more quickly.
What I don't understand is the connection between non-comet http requests
and comet requests. Running the same test above, without the non-comet http
requests (setting the # of threads in the HttpTest thread group to 0, and
avg 10.3s, min 10.0s, max 13s
A non-comet request shouldn't be tying up a poller thread, should it? So
why would non-comet requests delay the delivery of comet timeouts?
no, it doesn't, but here is the logic:
when the poller thread wakes up from a select(), it checks to see if
there were any events, or if the select() timed out.
If there were no events and the select() timed out, then it checks comet
connections for their timeout status.
You see, checking timeouts means the poller spends cpu cycles not
polling, and that can affect other running connections,
so a timeout is a lower priority.
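
In rough code form (my paraphrase of the logic Filip describes, not the
actual NioEndpoint source; the handler steps are left as comments):

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

public class PollerSketch implements Runnable {
    private final Selector selector;
    private final long selectorTimeout; // ms; Tomcat's NIO default is 1000

    public PollerSketch(Selector selector, long selectorTimeout) {
        this.selector = selector;
        this.selectorTimeout = selectorTimeout;
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // wake up when there is I/O, or after selectorTimeout ms
                int keyCount = selector.select(selectorTimeout);
                if (keyCount > 0) {
                    // I/O events get preference over timeout processing
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        // dispatch the ready connection to a worker (omitted)
                    }
                } else {
                    // select() timed out with nothing ready: only now walk the
                    // registered comet connections and fire expired timeouts
                }
            } catch (IOException e) {
                break; // selector failure: stop polling
            }
        }
    }
}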

But reducing selectorTimeout should yield very different values.
Post by otismo
Peter