Discussion:
org.apache.tomcat.jdbc.pool casting to original connection class
Alex O'Ree
2018-08-24 20:08:54 UTC
I have a use case where I need to downcast a pooled database connection
to the native class the driver uses. Unfortunately I don't see any APIs
I can use to do this. Are there any backdoors or mechanisms I can use?

Background: I'm using Postgres with Tomcat 8.5 and need to access some
Postgres-specific APIs (the large object API). I've tried both the
Tomcat-based pool and DBCP2 connection pooling.
Torsten Krah
2018-08-24 21:04:53 UTC
Permalink
The isWrapperFor(..) and unwrap(..) methods on the connection API should
work for this.
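
For example, a minimal sketch (assuming the pgjdbc driver and a pooled
DataSource; note it unwraps to the org.postgresql.PGConnection interface
rather than the concrete class):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.postgresql.PGConnection;

void withPgConnection(DataSource ds) throws SQLException {
    try (Connection conn = ds.getConnection()) {
        // Unwrap through the pool's proxy to the driver-level interface
        if (conn.isWrapperFor(PGConnection.class)) {
            PGConnection pg = conn.unwrap(PGConnection.class);
            // pg exposes the Postgres-specific APIs, e.g.
            // pg.getLargeObjectAPI() for large objects
        }
    }
}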
Alex O'Ree
2018-08-26 14:04:30 UTC
Perfect, thanks
Post by Torsten Krah
The isWrapperFor(..) and unwrap(..) methods on the connection API should
work for this.
Alex O'Ree
2018-08-27 13:47:40 UTC
Unfortunately, it's not working. I've tried unwrap for both
org.postgresql.jdbc.PgConnection (the concrete class) and
org.postgresql.PGConnection (the interface), and both fail to unwrap.
Any other suggestions?
Post by Alex O'Ree
Perfect, thanks
Post by Torsten Krah
The isWrapperFor(..) and unwrap(..) methods on the connection API should
work for this.
Alex O'Ree
2018-08-27 20:23:58 UTC
I figured it out. Classpath issue. I had the Postgres driver in my web app
and in Tomcat's lib folder. Removing it from the web app fixed it.
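
For anyone hitting the same thing, a quick diagnostic sketch (hypothetical,
but it shows why unwrap fails): load the interface through the webapp and
server class loaders and compare - two distinct Class objects mean the
driver jar is visible twice:

ClassLoader webapp = Thread.currentThread().getContextClassLoader();
ClassLoader server = webapp.getParent();
Class<?> a = Class.forName("org.postgresql.PGConnection", false, webapp);
Class<?> b = Class.forName("org.postgresql.PGConnection", false, server);
// false means the webapp and tomcat/lib each loaded their own copy, so
// isWrapperFor()/unwrap() against the webapp's copy can never match.
// (The server-loader lookup throws ClassNotFoundException if the jar
// is only in the webapp.)
System.out.println(a == b);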
Post by Alex O'Ree
Unfortunately, it's not working. I've tried unwrap for both
org.postgresql.jdbc.PgConnection (the concrete class) and
org.postgresql.PGConnection (the interface), and both fail to unwrap.
Any other suggestions?
Christopher Schultz
2018-08-27 23:19:03 UTC
Alex,
Post by Alex O'Ree
I have a use case where I need to downcast a pooled database
connection to the native class the driver uses. Unfortunately I don't
see any APIs I can use to do this. Are there any backdoors or
mechanisms I can use?
Background: I'm using Postgres with Tomcat 8.5 and need to access some
Postgres-specific APIs (the large object API). I've tried both the
Tomcat-based pool and DBCP2 connection pooling.
What's wrong with the existing JDBC APIs for working with BLOBs? It's
been a long time since anyone has really had to down-cast JDBC
interface types to their implementation types to get around
deficiencies in the APIs.

-chris
Alex O'Ree
2018-08-28 00:47:40 UTC
I'm storing large files. Postgres has a limit for blobs and uses a
different API for larger data; the cutoff is 1 GB.

Post by Christopher Schultz
What's wrong with the existing JDBC APIs for working with BLOBs? It's
been a long time since anyone has really had to down-cast JDBC
interface types to their implementation types to get around
deficiencies in the APIs.
Christopher Schultz
2018-08-28 13:21:20 UTC
Alex,
Post by Alex O'Ree
I'm storing large files. Postgres has a limit for blobs and uses a
different API for larger data; the cutoff is 1 GB.
Interesting. I wonder why Postgres decided not to support that through
the standard JDBC API.

Have you tried setting the "compatible" flag on the connection just to
see if it works for you?
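
(For reference, a sketch of how a driver flag like that can be passed
through the Tomcat pool's connectionProperties attribute; the "compatible"
property only exists in older pgjdbc releases, so check your driver
version first:)

<Resource name="jdbc/postgres" auth="Container"
          type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/mydb"
          username="..." password="..."
          connectionProperties="compatible=8.2"/>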

-chris
Torsten Krah
2018-08-28 13:27:56 UTC
Post by Christopher Schultz
Interesting. I wonder why Postgres decided not to support that
through the standard JDBC API.
Have you tried setting the "compatible" flag on the connection just to
see if it works for you?
I am curious - AFAIK the standard JDBC API does not support using,
e.g., seek() or truncate() on a BLOB to fast-forward to some position
in the file. How would you do that with the standard API, Christopher
- maybe I've missed that feature?

The LargeObject API does support this.
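
A rough sketch of what it allows (method names per the pgjdbc large
object docs; large objects must be used inside a transaction, hence
autocommit off):

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

conn.setAutoCommit(false);
LargeObjectManager lom =
    conn.unwrap(PGConnection.class).getLargeObjectAPI();
long oid = lom.createLO(LargeObjectManager.READWRITE);
LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
lo.seek(1024);                  // jump straight to an offset
lo.write(new byte[]{1, 2, 3});  // write at that position
lo.truncate(2048);              // cut the object down to 2 KiB
lo.close();
conn.commit();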

kind regards

Torsten
Christopher Schultz
2018-08-28 14:18:36 UTC
Torsten,
Post by Torsten Krah
I am curious - AFAIK the standard JDBC API does not support using,
e.g., seek() or truncate() on a BLOB to fast-forward to some position
in the file. How would you do that with the standard API, Christopher
- maybe I've missed that feature?
The LargeObject API does support this.
I've never tried doing random-access reads into BLOBs in a database
before... that seems like an architectural mistake to me.

But if I had to, I'd try something like this:

ResultSet rs = ...;
Blob blob = rs.getBlob("bigfield");

// Take a chunk out of the middle
byte[] chunk = blob.getBytes(start, length);
// or
InputStream readFromMiddle = blob.getBinaryStream(start, length);

// Truncate the blob
blob.truncate(newEndPos);

The Blob API isn't really that extensive... I'm not sure how you could
have missed those methods in there. It's really all there is.

Now... if Postgres artificially places a 1GiB limit on everything that
goes through that API, well, then you are kind of screwed.

-chris
Alex O'Ree
2018-08-28 15:00:36 UTC
From what I understand, the Postgres JDBC driver does support
reading/writing a result set or command via an input or output stream;
however, from my testing, it looks like it just buffers the whole thing
in memory. I actually had one case where I was able to insert 1.2 GB of
content into a bytea column but could not retrieve it - something about
an invalid allocation size. Postgres's limit is 1 GB, and they want you
to use the large object API. I think it stores the file outside of the
normal database table file.
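
(For illustration, the standard bytea route looks like the sketch below -
table and column names are hypothetical - and this is the path that
appears to buffer everything:)

// conn, in (an InputStream), and size are assumed to be in scope
try (PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO files (name, data) VALUES (?, ?)")) {
    ps.setString(1, "big.bin");
    ps.setBinaryStream(2, in, size); // streaming API, buffered by the driver
    ps.executeUpdate();
}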

Post by Christopher Schultz
I've never tried doing random-access reads into BLOBs in a database
before... that seems like an architectural mistake to me.
ResultSet rs = ...;
Blob blob = rs.getBlob("bigfield");
// Take a chunk out of the middle
byte[] chunk = blob.getBytes(start, length);
// or
InputStream readFromMiddle = blob.getBinaryStream(start, length);
// Truncate the blob
blob.truncate(newEndPos);
The Blob API isn't really that extensive... I'm not sure how you could
have missed those methods in there. It's really all there is.
Now... if Postgres artificially places a 1GiB limit on everything that
goes through that API, well, then you are kind of screwed.
Christopher Schultz
2018-08-28 15:36:55 UTC
Alex,
Post by Alex O'Ree
From what I understand, the Postgres JDBC driver does support
reading/writing a result set or command via an input or output stream;
however, from my testing, it looks like it just buffers the whole
thing in memory.
Yikes. I guess that's what their LargeObject API is intended to avoid.
It's frustrating that they don't provide the same facility through the
standard API. It doesn't seem like it would be that hard to do.
Post by Alex O'Ree
I actually had one case where I was able to insert 1.2 GB of content
into a bytea column but could not retrieve it - something about an
invalid allocation size. Postgres's limit is 1 GB, and they want you
to use the large object API. I think it stores the file outside of
the normal database table file.
It does. I did a little reading about it this morning because I was
interested. Evidently, those large-objects are stored in a way that
provides absolutely zero authorization-checks against authenticated
users. So if you can log in to the database, you can read all those
files. So if this is a shared database or if you don't 100% trust
every db-user, you might want to think about adding encryption to
those files.

It's unclear to me if they can be modified by other users as well. If
they can be modified but aren't otherwise sensitive, you might want to
add some kind of binary signature to the host table to ensure that the
data hasn't been tampered-with when you weren't looking.
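
For example (a sketch: lo is an open LargeObject, and
LargeObject.getInputStream() is the pgjdbc call; where you persist the
digest is up to your schema):

MessageDigest md = MessageDigest.getInstance("SHA-256");
try (InputStream in = lo.getInputStream()) {
    byte[] buf = new byte[8192];
    for (int n; (n = in.read(buf)) != -1; ) {
        md.update(buf, 0, n);
    }
}
byte[] digest = md.digest(); // store next to the oid; recompute and compare on read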

-chris