Discussion:
Too many connections in keepalive state in jk threadpool
Beier Michael
2012-03-02 10:19:11 UTC
Permalink
Hi all,

we're running Tomcat 7.0.23 on Sun JDK 1.6.0_29, connected via AJP to httpd 2.2.21 using mod_jk 1.2.32.

I have observed that Tomcat keeps threads in its AJP pool in keepalive state, regardless of which timeouts (connectionTimeout and keepAliveTimeout) are configured in Tomcat.
I tested three connector configurations, and with all of them I see connections in the Tomcat server status whose "Time" value grows to several million milliseconds, which is far more than the configured connectionTimeout/keepAliveTimeout.
This results in 60-80 percent of the thread pool being in state "keepAlive".

1)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
keepAliveTimeout="300000" connectionTimeout="300000" />
2)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
keepAliveTimeout="300000" />
3)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false" />

In mod_jk the connection_pool_timeout is set to the same value as connectionTimeout (only in seconds, not milliseconds).
I verified via JMX that the values are set correctly.
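For reference, since mod_jk expresses this timeout in seconds while Tomcat uses milliseconds, a matching pair would look roughly like this (the worker name "tomcat1" is illustrative, not from the original post):

```properties
# workers.properties (mod_jk) -- connection_pool_timeout is in SECONDS
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8309
worker.tomcat1.connection_pool_timeout=300

# matching server.xml attribute (Tomcat) -- value is in MILLISECONDS:
#   <Connector port="8309" protocol="AJP/1.3" connectionTimeout="300000" ... />
```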

How can I avoid having so many threads in keepalive state? I don't have any idea at the moment and can't see an error in my configuration.
Best regards,
Michael
Team SIS OIOAW (Web Basis)

EnBW Systeme Infrastruktur Support GmbH
Durlacher Allee 93
76131 Karlsruhe
Phone: +49 (7 21) 63 - 14545
Fax: +49 (7 21) 63 - 15099
mailto:***@enbw.com
EnBW Systeme Infrastruktur Support GmbH
Registered office: Karlsruhe
Commercial register: Amtsgericht Mannheim, HRB 108550
Chairman of the supervisory board: Dr. Bernhard Beck
Managing directors: Jochen Adenau, Hans-Günther Meier
André Warnier
2012-03-02 12:01:26 UTC
Permalink
Post by Beier Michael
Hi all,
we're running tomcat 7.0.23 on sun jdk 1.6.0_29, connected via ajp to httpd 2.2.21 using mod_jk 1.2.32.
I observed the behavior, that tomcat keeps threads in its ajp pool in keepalive state, regardless of which timeouts (connectionTimeout and keepAliveTimeout) are configured in tomcat.
I tested three connector configurations and with all I see connections in tomcat server status where the "Time" value amounts up to several million milliseconds, which is more than configured in connectionTimeout/keepAliveTimeout.
This results in having 60-80 percent of the thread pool being in state "keepAlive".
1)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
keepAliveTimeout="300000" connectionTimeout="300000" />
2)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
keepAliveTimeout="300000" />
3)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false" />
In mod_jk the connection_pool_timeout is set to the same value as connectionTimeout (only in seconds, not milliseconds).
I verified that the values are set correctly querying the parameters via JMX.
How can I avoid having so many threads in keepalive state - I don't have any idea at the moment and can't see that there is an error in my configuration.
Before discussing this, I find it useful to review the basics, such as in :

http://en.wikipedia.org/wiki/HTTP_persistent_connection
and
http://tomcat.apache.org/connectors-doc/generic_howto/timeouts.html

In other words, at the level of your front-end webserver (which I suppose you have, since
you are talking about mod_jk and AJP), do you really need a long KeepAliveTimeout ?
(And similarly at the level of your Tomcat <Connector>s above.)

As per the documentation :

connectionTimeout

The number of milliseconds this Connector will wait, after accepting a connection, for the
request URI line to be presented. The default value is 60000 (i.e. 60 seconds).

keepAliveTimeout

The number of milliseconds this Connector will wait for another AJP request before closing
the connection. The default value is to use the value that has been set for the
connectionTimeout attribute.

In other words,
- connectionTimeout defaults to 60 seconds
- if you specify neither of them, they both default to 60 seconds.
- if you specify connectionTimeout but not keepAliveTimeout, then keepAliveTimeout
defaults to the same value as connectionTimeout.
- your value above for keepAliveTimeout (300000) means 5 minutes

Do you really want one Tomcat thread to wait for 5 minutes doing nothing, just in case the
browser decides to send another request on the same connection ?
And do you really want, when a browser creates its initial TCP connection to your
webserver, to give it 60 seconds (or 5 minutes !) before it even starts sending its HTTP
request on that connection ?
Beier Michael
2012-03-02 13:24:26 UTC
Permalink
Hi,

all the points you've mentioned are important and have been considered. A 5-minute timeout for connection / keepAlive is a very long time, but it is OK for our web apps running in our intranet.
But at the moment I'd be quite happy if Tomcat would honor the defined timeouts and terminate the threads that have been in keepalive state for more than 5 minutes.

But this does not happen!

So my first goal is to make Tomcat respect the timeouts I define.
The second goal then might be fine-tuning the timeouts.

Best regards,
Michael

-----Original Message-----
From: André Warnier [mailto:***@ice-sa.com]
Sent: Friday, 2 March 2012 13:01
To: Tomcat Users List
Subject: Re: Too many connections in keepalive state in jk threadpool
Post by Beier Michael
Hi all,
we're running tomcat 7.0.23 on sun jdk 1.6.0_29, connected via ajp to httpd 2.2.21 using mod_jk 1.2.32.
I observed the behavior, that tomcat keeps threads in its ajp pool in keepalive state, regardless of which timeouts (connectionTimeout and keepAliveTimeout) are configured in tomcat.
I tested three connector configurations and with all I see connections in tomcat server status where the "Time" value amounts up to several million milliseconds, which is more than configured in connectionTimeout/keepAliveTimeout.
This results in having 60-80 percent of the thread pool being in state "keepAlive".
1)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
keepAliveTimeout="300000" connectionTimeout="300000" />
2)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
keepAliveTimeout="300000" />
3)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343"
tomcatAuthentication="false" />
In mod_jk the connection_pool_timeout is set to the same value as connectionTimeout (only in seconds, not milliseconds).
I verified that the values are set correctly querying the parameters via JMX.
How can I avoid having so many threads in keepalive state - I don't have any idea at the moment and can't see that there is an error in my configuration.
Before discussing this, I find it useful to review the basics, such as in :

http://en.wikipedia.org/wiki/HTTP_persistent_connection
and
http://tomcat.apache.org/connectors-doc/generic_howto/timeouts.html

In other words, at the level of your front-end webserver (which I suppose you have, since you are talking about mod_jk and AJP), do you really need a long KeepAliveTimeout ?
(and similarly at the level of your Tomcat <Connector>'s above).

As per the documentation :

connectionTimeout

The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented. The default value is 60000 (i.e. 60 seconds).

keepAliveTimeout

The number of milliseconds this Connector will wait for another AJP request before closing the connection. The default value is to use the value that has been set for the connectionTimeout attribute.

In other words,
- connectionTimeout defaults to 60 seconds
- if you do not specify either one of them, then they both default to 60 seconds.
- if you do specify connectionTimeout and not KeepAliveTimeout, then KeepAliveTimeout defaults to the same value as connectionTimeout.
- your value above for KeepAliveTimeout (300000) means 5 minutes

Do you really want one Tomcat thread to wait for 5 minutes doing nothing, just in case the browser would decide to send another request on the same connection ?
And do you really want, when a browser creates its initial TCP connection to your webserver, to give it 60 seconds (or 5 mintes !) before it even starts sending its HTTP request on that connection ?




André Warnier
2012-03-02 17:53:28 UTC
Permalink
Hi.

The recommended way of replying to messages on this list is to write your replies below
the comment/question to which they relate.
It makes it much easier to follow the flow of the conversation.
Post by Beier Michael
-----Original Message-----
Sent: Friday, 2 March 2012 13:01
To: Tomcat Users List
Subject: Re: Too many connections in keepalive state in jk threadpool
Post by Beier Michael
Hi all,
we're running tomcat 7.0.23 on sun jdk 1.6.0_29, connected via ajp to httpd 2.2.21 using mod_jk 1.2.32.
I observed the behavior, that tomcat keeps threads in its ajp pool in keepalive state, regardless of which timeouts (connectionTimeout and keepAliveTimeout) are configured in tomcat.
I tested three connector configurations and with all I see connections in tomcat server status where the "Time" value amounts up to several million milliseconds, which is more than configured in connectionTimeout/keepAliveTimeout.
This results in having 60-80 percent of the thread pool being in state "keepAlive".
1)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
keepAliveTimeout="300000" connectionTimeout="300000" />
2)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
keepAliveTimeout="300000" />
3)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343"
tomcatAuthentication="false" />
In mod_jk the connection_pool_timeout is set to the same value as connectionTimeout (only in seconds, not milliseconds).
I verified that the values are set correctly querying the parameters via JMX.
How can I avoid having so many threads in keepalive state - I don't have any idea at the moment and can't see that there is an error in my configuration.
http://en.wikipedia.org/wiki/HTTP_persistent_connection
and
http://tomcat.apache.org/connectors-doc/generic_howto/timeouts.html
In other words, at the level of your front-end webserver (which I suppose you have, since you are talking about mod_jk and AJP), do you really need a long KeepAliveTimeout ?
(and similarly at the level of your Tomcat <Connector>'s above).
connectionTimeout
The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented. The default value is 60000 (i.e. 60 seconds).
keepAliveTimeout
The number of milliseconds this Connector will wait for another AJP request before closing the connection. The default value is to use the value that has been set for the connectionTimeout attribute.
In other words,
- connectionTimeout defaults to 60 seconds
- if you do not specify either one of them, then they both default to 60 seconds.
- if you do specify connectionTimeout and not KeepAliveTimeout, then KeepAliveTimeout defaults to the same value as connectionTimeout.
- your value above for KeepAliveTimeout (300000) means 5 minutes
Do you really want one Tomcat thread to wait for 5 minutes doing nothing, just in case the browser would decide to send another request on the same connection ?
And do you really want, when a browser creates its initial TCP connection to your webserver, to give it 60 seconds (or 5 mintes !) before it even starts sending its HTTP request on that connection ?
Hi,
all the points you've mentioned are important and have been considered. 5 minutes
timeout for connection / keepAlive is a very long time, but this is OK for our web apps
running in our intranet.
It may be OK for you, but it may also be the reason why you have processes/threads which
remain alive and are blocking connections.

I do not know the deep details of how mod_jk works, but :
- the browser makes a connection to the front-end server
- Apache httpd accepts the connection and passes it to an httpd child process for request
processing
- the browser sends an HTTP request over that connection, requesting Keep-Alive
- the front-end server's child processes the request. In the course of doing this,
mod_jk establishes a connection to the back-end Tomcat, and passes the request to Tomcat
over that connection
- in Tomcat, a thread starts to process the request
- in Tomcat, the thread sends the response and finishes processing that request
- but because the connection is Keep-Alive, it does not close the connection (from
mod_jk), and keeps waiting for more requests
- in the meantime, the Apache child that processed the request (sending it through mod_jk
to Tomcat) will not close its connection to the browser either, and will probably keep its
connection to Tomcat open as well
- in the meantime, another browser connects to the server, which will also result in an
Apache front-end child waiting for 5 minutes doing nothing, blocking a connection to
Tomcat and a thread in Tomcat..
And so on.

It is probably more complex than that, since mod_jk uses a pool of connections to
Tomcat and tries to keep them alive and re-use them, for efficiency reasons.
So it is a bit complicated to determine which KeepAlive "wins" here, and what in the end
is causing these Tomcat threads to remain when they shouldn't.
Post by Beier Michael
But at the moment I'd be quite happy, if tomcat would make use of the defined timeouts
and terminate the threads, that have been in keepalive state for more than 5 minutes.
Post by Beier Michael
But this does not happen!
So my first goal is, to make tomcat respect the timeouts I define.
The second goal then might be fine tuning the timeouts.
My point was : first set your timeouts to a reasonable value (2-3 seconds, for example),
and then check whether you still have a bunch of threads in Tomcat doing nothing.
If you still do, then you may have a problem worth investigating further.
But if you don't, then why make your life complicated and look for problems where there
aren't any ?
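As a concrete illustration of that suggestion (the values here are illustrative only, not taken from the thread), a connector with short timeouts would look like:

```xml
<!-- 3000 ms = 3 s; both attributes are in milliseconds -->
<Connector port="8309" protocol="AJP/1.3"
           maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
           connectionTimeout="3000" keepAliveTimeout="3000" />
```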
Beier Michael
2012-03-02 20:43:32 UTC
Permalink
Post by Beier Michael
-----Original Message-----
Sent: Friday, 2 March 2012 18:53
To: Tomcat Users List
Subject: Re: AW: Too many connections in keepalive state in jk threadpool
Hi.
The recommended way of replying to messages on this list, is
to write your replies below
the comment/question to which it relates.
It makes it much easier to follow the flow of the conversation.
Post by Beier Michael
-----Original Message-----
Sent: Friday, 2 March 2012 13:01
To: Tomcat Users List
Subject: Re: Too many connections in keepalive state in jk threadpool
Post by Beier Michael
Hi all,
we're running tomcat 7.0.23 on sun jdk 1.6.0_29, connected
via ajp to httpd 2.2.21 using mod_jk 1.2.32.
Post by Beier Michael
I observed the behavior, that tomcat keeps threads in its
ajp pool in keepalive state, regardless of which timeouts
(connectionTimeout and keepAliveTimeout) are configured in tomcat.
Post by Beier Michael
I tested three connector configurations and with all I see
connections in tomcat server status where the "Time" value
amounts up to several million milliseconds, which is more than
configured in connectionTimeout/keepAliveTimeout.
Post by Beier Michael
This results in having 60-80 percent of the thread pool
being in state "keepAlive".
Post by Beier Michael
1)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343"
tomcatAuthentication="false"
Post by Beier Michael
keepAliveTimeout="300000"
connectionTimeout="300000" />
Post by Beier Michael
2)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343"
tomcatAuthentication="false"
Post by Beier Michael
keepAliveTimeout="300000" />
3)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343"
tomcatAuthentication="false" />
In mod_jk the connection_pool_timeout is set to the same
value as connectionTimeout (only in seconds, not milliseconds).
Post by Beier Michael
I verified that the values are set correctly querying the
parameters via JMX.
Post by Beier Michael
How can I avoid having so many threads in keepalive state -
I don't have any idea at the moment and can't see that there
is an error in my configuration.
Post by Beier Michael
Before discussing this, I find it useful to review the
http://en.wikipedia.org/wiki/HTTP_persistent_connection
and
http://tomcat.apache.org/connectors-doc/generic_howto/timeouts.html
In other words, at the level of your front-end webserver
(which I suppose you have, since you are talking about mod_jk
and AJP), do you really need a long KeepAliveTimeout ?
Post by Beier Michael
(and similarly at the level of your Tomcat <Connector>'s above).
connectionTimeout
The number of milliseconds this Connector will wait, after
accepting a connection, for the request URI line to be
presented. The default value is 60000 (i.e. 60 seconds).
Post by Beier Michael
keepAliveTimeout
The number of milliseconds this Connector will wait for
another AJP request before closing the connection. The default
value is to use the value that has been set for the
connectionTimeout attribute.
Post by Beier Michael
In other words,
- connectionTimeout defaults to 60 seconds
- if you do not specify either one of them, then they both
default to 60 seconds.
Post by Beier Michael
- if you do specify connectionTimeout and not
KeepAliveTimeout, then KeepAliveTimeout defaults to the same
value as connectionTimeout.
Post by Beier Michael
- your value above for KeepAliveTimeout (300000) means 5 minutes
Do you really want one Tomcat thread to wait for 5 minutes
doing nothing, just in case the browser would decide to send
another request on the same connection ?
Post by Beier Michael
And do you really want, when a browser creates its initial
TCP connection to your webserver, to give it 60 seconds (or 5
mintes !) before it even starts sending its HTTP request on
that connection ?
Post by Beier Michael
Hi,
all the points you've mentioned are important and have been
considered. 5 minutes
timeout for connection / keepAlive is a very long time, but
this is OK for our web apps
running in our intranet.
It may be ok for you, but it may also be the reason why you
have processes/threads which
remain alive and are blocking connections.
That's a very sophisticated thesis. I configured a 300-second
keepAliveTimeout and find threads in keepalive state that are
more than 1000 seconds old. Why should this behaviour change
if I reduce the configured timeout? But .. I'll try ..
Post by André Warnier
- the browser make a connection to the front-end server
- Apache httpd accepts the connection and passes it to a httpd
child process, for request
processing
- the browser sends a HTTP request over that connection,
requesting Keep-Alive
- the front-end server's child processes the request. In the
process of doing this,
mod_jk establishes a connection to the back-end Tomcat, and
passes the request to Tomcat
over that connection
- in Tomcat, a thread starts to process the request
- in Tomcat, the thread sends the response and finishes to
process that request
- but because the connection is Keep-Alive, it does not close
the connection (from
mod_jk), and keeps waiting for more requests
- in the meantime, the Apache child who processed the request
(sending it through mod_jk
to Tomcat) will not close its connection to the browser
either, and will probably keep its
connection to Tomcat open also
- in the meantime, another browser connects to the server,
which will also result in an
Apache front-end child waiting for 5 minutes doing nothing,
and blocking a connection to
Tomcat and a thread in Tomcat..
And so on.
It is probably more complex than that, since mod_jk will use a
pool of connections to
Tomcat, and try to keep them alive and re-use them, for
efficiency reasons.
So it is a bit complicated to determine which KeepAlive "wins"
here, and what in the end
is causing these Tomcat threads to remain when they shouldn't.
Post by Beier Michael
But at the moment I'd be quite happy, if tomcat would make
use of the defined timeouts
and terminate the threads, that have been in keepalive state
for more than 5 minutes.
But this does not happen!
So my first goal is, to make tomcat respect the timeouts I define.
The second goal then might be fine tuning the timeouts.
My point was : first set your timeouts to a reasonable value
(2-3 seconds for example),
and then check if you still have a bunch of threads in Tomcat
doing nothing.
If you still do, then you may have a problem worth
investigating further.
But if you don't, then why make your life complicated and look
for problems where there
aren't any ?
As I wrote above - I'll try and get back to you.

Best regards,
Michael

Rainer Jung
2012-03-03 15:57:30 UTC
Permalink
Hello Mr. Beier,
Post by Beier Michael
Hi all,
we're running tomcat 7.0.23 on sun jdk 1.6.0_29, connected via ajp to httpd 2.2.21 using mod_jk 1.2.32.
I observed the behavior, that tomcat keeps threads in its ajp pool in keepalive state, regardless of which timeouts (connectionTimeout and keepAliveTimeout) are configured in tomcat.
I tested three connector configurations and with all I see connections in tomcat server status where the "Time" value amounts up to several million milliseconds, which is more than configured in connectionTimeout/keepAliveTimeout.
This results in having 60-80 percent of the thread pool being in state "keepAlive".
1)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
keepAliveTimeout="300000" connectionTimeout="300000" />
2)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false"
keepAliveTimeout="300000" />
3)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343" tomcatAuthentication="false" />
In mod_jk the connection_pool_timeout is set to the same value as connectionTimeout (only in seconds, not milliseconds).
I verified that the values are set correctly querying the parameters via JMX.
How can I avoid having so many threads in keepalive state - I don't have any idea at the moment and can't see that there is an error in my configuration.
Educated guess: you have an interval-based cping/cpong connection check
configured for mod_jk.

Any cping will wake up the thread waiting for data on the connection and
will reset the timeouts. But a cping is immediately answered by a
cpong and does not update the "last request" time. So that would explain why
your connections never time out even though the Manager shows constantly
increasing times for the last request seen.

Usually that feature is activated for mod_jk using the
JkWatchdogInterval directive in combination with ping_mode "I" or "A". In case you
are unsure about the effects of the various jk configuration options,
you might post them here (remove sensitive data before posting).

I'd say the current behaviour is a bit problematic, but I don't see an
easy improvement. So if your focus is on keeping the number of idle
connections low, you would need to switch off interval cpings. Cpings
before requests and after opening connections are fine (they improve
stability and reduce the likelihood of race conditions).
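For readers unfamiliar with the settings Rainer mentions, an interval-ping setup would look roughly like this (worker name and values are illustrative, not from the thread):

```properties
# httpd.conf (mod_jk global): the watchdog thread runs every 60 s
#   JkWatchdogInterval 60
#
# workers.properties: "A" enables all ping types, including interval ("I")
# pings -- the ones that keep resetting Tomcat's keepalive timer
worker.tomcat1.ping_mode=A
worker.tomcat1.ping_timeout=10000
```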

HTH

Rainer Jung
Beier Michael
2012-03-05 13:06:46 UTC
Permalink
Hello Mr. Jung,
Post by Beier Michael
-----Original Message-----
Hello Mr. Beier,
Post by Beier Michael
Hi all,
we're running tomcat 7.0.23 on sun jdk 1.6.0_29, connected
via ajp to httpd 2.2.21 using mod_jk 1.2.32.
Post by Beier Michael
I observed the behavior, that tomcat keeps threads in its
ajp pool in keepalive state, regardless of which timeouts
(connectionTimeout and keepAliveTimeout) are configured in tomcat.
Post by Beier Michael
I tested three connector configurations and with all I see
connections in tomcat server status where the "Time" value
amounts up to several million milliseconds, which is more than
configured in connectionTimeout/keepAliveTimeout.
Post by Beier Michael
This results in having 60-80 percent of the thread pool
being in state "keepAlive".
Post by Beier Michael
1)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343"
tomcatAuthentication="false"
Post by Beier Michael
keepAliveTimeout="300000"
connectionTimeout="300000" />
Post by Beier Michael
2)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343"
tomcatAuthentication="false"
Post by Beier Michael
keepAliveTimeout="300000" />
3)
<Connector port="8309" protocol="AJP/1.3"
maxThreads="200" redirectPort="8343"
tomcatAuthentication="false" />
Post by Beier Michael
In mod_jk the connection_pool_timeout is set to the same
value as connectionTimeout (only in seconds, not milliseconds).
Post by Beier Michael
I verified that the values are set correctly querying the
parameters via JMX.
Post by Beier Michael
How can I avoid having so many threads in keepalive state -
I don't have any idea at the moment and can't see that there
is an error in my configuration.
Educated guess: you have an interval based cping/cpong
connection check
configured for mod_jk.
You're right, that's the way cping/cpong was configured.
Post by Rainer Jung
Any cping will wake up the thread waiting for data on the
connection and
will reset the timeouts. But a cping will be ommediately answered by a
cpong and not update the "last request" time. So that would
explain, why
your connections never timeout though the Manager shows constantly
increasing times for the last request seen.
OK, that's important information about the meaning of the "Time" value in Tomcat
server status. Maybe it should be added to the jk worker docs and/or the
Tomcat manager-howto.
Post by Rainer Jung
Usually that feature would be activated for mo_jk using the
JkWatchdogInterval in combination with ping_mode "I" or "A".
In case you
are unsure about the effects of the various jk configuration options,
you might post them here (remove sensitive data before posting).
I'd say the current behaviour is a bit problematic, but I don't see an
easy improvement. So if your focus is on keeping the number of idle
connections low you would need to switch off interval cpings. Cping
before rquests and after opening connections are fine (improves
stability and reduces the likeliness of race conditions).
I disabled interval cping by setting "ping_mode = C,P" instead of "A".
At the moment everything looks good and tomcat behaves as expected.
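The change described would look roughly like this in workers.properties (the worker name is illustrative; the original post writes the flags as "C,P", shown here without the comma):

```properties
# before: interval pings ("I", included in "A") kept resetting the
# connector's keepalive timer
#worker.tomcat1.ping_mode=A

# after: ping only when a connection is opened (C) and before each request (P)
worker.tomcat1.ping_mode=CP
```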

Thanks for your help!

Best regards,
Michael Beier

marcobuc
2012-08-23 07:50:53 UTC
Permalink
Hi,
we are experiencing a very similar problem, with the difference that we are
using mod_proxy_ajp instead of mod_jk to connect Apache with Tomcat. As with
mod_jk, the connection is made to the jk-8009 port opened by a connector
configured in Tomcat's server.xml file.
<Connector port="8009"
enableLookups="false" redirectPort="8443" protocol="AJP/1.3"
/>

We tried configuring the timeout parameters for mod_proxy_ajp to tell Apache
to drop connections older than 2 minutes, but we see in the Tomcat manager
application that the jk-8009 connector keeps keepalive connections open
for millions of milliseconds:
K 1783874292 ms ? ? 84.18.132.114 ? ?

I would like to try configuring the ping_mode parameter, but I do not know
whether this is possible, i.e. whether this parameter exists only for mod_jk.
Here is an example of the configuration we added in httpd.conf for
mod_proxy_ajp:

ProxyPass /manager ajp://localhost:8009/manager max=10 retry=10 timeout=30
ttl=120
ProxyPassReverse /manager ajp://localhost:8009/manager

Thanks for any help,
Marco.




--
View this message in context: http://tomcat.10.n6.nabble.com/Too-many-connections-in-keepalive-state-in-jk-threadpool-tp4539290p4985585.html
Sent from the Tomcat - User mailing list archive at Nabble.com.
Rainer Jung
2012-08-23 08:47:23 UTC
Permalink
Post by marcobuc
Hi,
we are experiencing a very similar problem with the difference that we are
using mod_proxy_ajp instead of mod_jk to connect Apache with tomcat. As for
mod_jk, the connection is done to the 8009-jk port opened by a connector
configured in tomcat server.xml file.
<Connector port="8009"
enableLookups="false" redirectPort="8443" protocol="AJP/1.3"
/>
We tried configuring the timeout parameters for mod_proxy_ajp to tell Apache
to drop connection older than 2 minutes, but we see in tomcat manager
application that the jk-8009 connector retains Keepalive connections open
K 1783874292 ms ? ? 84.18.132.114 ? ?
Can you see the connections in the output of "netstat -an"?

What is their state there?
Post by marcobuc
I would like to try configuring the ping_mode parameter but I do not know if
this is possible, i.e. if this parameter exists only for mod_jk.
Here an example of configuration we added in httpd.conf file for the
mod_proxy_ajp configuration.
ProxyPass /manager ajp://localhost:8009/manager max=10 retry=10 timeout=30
ttl=120
ProxyPassReverse /manager ajp://localhost:8009/manager
Look for "ping" and "ttl" on

http://httpd.apache.org/docs/2.2/mod/mod_proxy.html

if using 2.2 or

http://httpd.apache.org/docs/2.4/mod/mod_proxy.html

if using httpd 2.4. Note that 2.4 had a connection-closing bug
which was fixed very recently in 2.4.3.
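For mod_proxy_ajp, the counterpart of mod_jk's cping is the per-connection "ping" parameter (available for AJP workers since httpd 2.2.7); a hedged variant of Marco's line might look like:

```apacheconf
# ping=3 sends an AJP CPing and waits up to 3 s for the CPong before
# forwarding each request over a pooled connection
ProxyPass /manager ajp://localhost:8009/manager max=10 retry=10 timeout=30 ttl=120 ping=3
ProxyPassReverse /manager ajp://localhost:8009/manager
```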

Regards,

Rainer
marcobuc
2012-08-24 13:07:48 UTC
Permalink
Thank you for your reply Rainer,
with netstat -an I see a lot of connections in ESTABLISHED state on port
8009 coming from localhost, so I think I can assume that these are the
connections established between Apache and Tomcat, which both reside on the
same machine. In any case, in the Tomcat manager webapp I see that all of
them are in the K state.
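To break that netstat output down by TCP state, something like the pipeline below can be used (a sketch; netstat field positions can vary between platforms, and sample data is inlined here so the example is self-contained):

```shell
# Count connections touching local port 8009, grouped by TCP state.
# Live usage would be:
#   netstat -an | awk '$4 ~ /:8009$/ {print $6}' | sort | uniq -c
sample='tcp 0 0 127.0.0.1:8009 127.0.0.1:45102 ESTABLISHED
tcp 0 0 127.0.0.1:8009 127.0.0.1:45104 ESTABLISHED
tcp 0 0 127.0.0.1:8009 127.0.0.1:45106 TIME_WAIT'
printf '%s\n' "$sample" | awk '$4 ~ /:8009$/ {print $6}' | sort | uniq -c
```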

Googling and reading the documentation more carefully, I came across this
discussion:
http://serverfault.com/questions/149171/keep-alive-header-not-sent-from-tomcat-5-5-http-connector

which moved my attention to the AJP connector configured in Tomcat's server.xml
file. It seems the "connectionTimeout" parameter (the doc says "The number
of milliseconds this Connector will wait, after accepting a connection, for
the request URI line to be presented. The default value is infinite (i.e. no
timeout).") defaults to "infinite", and this affects the other parameter,
"keepAliveTimeout" (the doc says "The number of milliseconds this Connector
will wait for another AJP request before closing the connection. The default
value is to use the value that has been set for the connectionTimeout
attribute."). keepAliveTimeout exists only in Tomcat 6+; in Tomcat 5.5 you
can only set connectionTimeout. I suppose that, without touching these
parameters, the connections remain open in the K state even if they do not
receive a PING or a new request.
In any case, I tried changing the values of these two parameters in both
Tomcat 6.0 and Tomcat 5.5, and this does seem to close the connections after
the configured time:

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
connectionTimeout="10000" keepAliveTimeout="10000"
/>

I tried short values (for example 10 s) and long ones (300 s), and
everything seems to work correctly. I also did some tests with a JSP page
that takes a long time to serve its response, and in those cases too
everything works fine. If the page takes longer to respond than the
timeout, the connection is not closed. If the page takes longer to respond
than the TIMEOUT and TTL configured on the Apache side, the browser gets a
proxy timeout error, but on the server (Tomcat manager app) I see my page in
Service state until it finishes all its work.
Now I see in the Tomcat manager app that the list of connections in the pool
is normally very short, as expected.

Thank you for your help,
Marco.





